ISO IEC 23894-2023 PDF
Name in English:
St ISO IEC 23894-2023
Name in Russian:
Ст ISO IEC 23894-2023
Full PDF version of the original ISO/IEC 23894:2023 standard. Additional information and a preview are available on request.
Full title and description
ISO/IEC 23894:2023 — Information technology — Artificial intelligence — Guidance on risk management. This international standard provides guidance for organizations that develop, produce, deploy or use products, systems and services that utilize artificial intelligence (AI), focusing on identifying, assessing, treating and monitoring AI-specific risks and integrating risk management into AI-related activities and functions.
Abstract
This document offers practical guidance to help organizations integrate risk management into AI lifecycles and governance. It describes processes for AI risk identification, analysis, evaluation and treatment, and outlines how to implement risk management effectively in an organization’s AI activities. The guidance is intended to be scalable and adaptable to different organizational contexts.
General information
- Status: Published.
- Publication date: February 2023.
- Publisher: Joint publication by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), developed under ISO/IEC JTC 1/SC 42 (Artificial Intelligence).
- ICS / categories: 35.020 (Information technology (IT) in general).
- Edition / version: Edition 1 (2023-02).
- Number of pages: 26.
Scope
ISO/IEC 23894:2023 provides guidance on managing risks that are specific to AI systems and AI-enabled products and services. It is intended for any organization — regardless of size or sector — that develops, deploys or uses AI, and it supports integration of AI risk management with overall enterprise risk processes and AI governance arrangements. The standard is advisory (guidance) in nature and is designed to be used alongside general risk management standards and AI management system standards.
Key topics and requirements
- AI-specific risk identification across the AI lifecycle (data, models, deployment, operation).
- Risk analysis and evaluation methods tailored to AI hazards and harms (safety, fairness, privacy, security, reliability).
- Risk treatment and mitigation strategies, including technical and organisational controls.
- Integration of AI risk management into governance, roles and responsibilities, and decision‑making processes.
- Monitoring, review and continual improvement of AI risk controls and residual risk tracking.
- Alignment and compatibility with existing risk management frameworks and AI management system standards.
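As an illustration only (this sketch is not part of the standard), the identify, analyse, evaluate and treat steps listed above are often captured in a simple risk register. All field names, scoring scales and the acceptance threshold below are hypothetical assumptions; ISO/IEC 23894:2023 does not prescribe a specific scoring scheme:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical 1-5 likelihood/severity scales; not taken from the standard.
@dataclass
class AIRisk:
    risk_id: str
    description: str           # identified risk (e.g. biased training data)
    lifecycle_stage: str       # data, model, deployment, or operation
    likelihood: int            # 1 (rare) .. 5 (almost certain)
    severity: int              # 1 (negligible) .. 5 (critical)
    treatment: str = ""        # planned mitigation / control
    residual_score: Optional[int] = None  # tracked after treatment

    def score(self) -> int:
        """Simple risk analysis: likelihood x severity."""
        return self.likelihood * self.severity

    def needs_treatment(self, threshold: int = 9) -> bool:
        """Risk evaluation against an assumed organizational threshold."""
        return self.score() >= threshold

register = [
    AIRisk("R1", "Training data under-represents some user groups",
           "data", likelihood=4, severity=4),
    AIRisk("R2", "Model drift degrades accuracy in production",
           "operation", likelihood=3, severity=2),
]

# Evaluate which risks exceed the acceptance threshold, then record treatment.
to_treat = [r for r in register if r.needs_treatment()]
for r in to_treat:
    r.treatment = "Apply technical/organisational controls; re-assess"

print([r.risk_id for r in to_treat])  # R1 scores 16, R2 scores 6 -> ['R1']
```

A real register would also record risk owners, review dates and residual-risk history to support the monitoring and continual-improvement activities the guidance describes.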
Typical use and users
This guidance is intended for risk managers, AI system developers and engineers, product managers, compliance and governance teams, auditors, procurement teams, and regulators who need to understand and manage AI-specific risks. It is applicable to organizations building AI, integrating third‑party AI components, or using AI-based services and wanting to document, assess and mitigate AI-related risks.
Related standards
ISO/IEC 23894 is intended to complement general and AI-specific management standards. Relevant related standards include ISO 31000 (risk management principles and guidelines), ISO/IEC 42001 (AI management systems — requirements for establishing and maintaining an Artificial Intelligence Management System), and ISO/IEC 22989 (AI concepts and terminology). Implementers commonly apply ISO/IEC 23894 together with these foundational terminology and management system standards.
Keywords
AI risk management, artificial intelligence, risk identification, risk assessment, risk treatment, governance, ISO/IEC JTC 1/SC 42, AI lifecycle, compliance, trustworthiness.
FAQ
Q: What is this standard?
A: ISO/IEC 23894:2023 is an international guidance standard on managing risks specific to artificial intelligence systems and AI-enabled products and services.
Q: What does it cover?
A: It covers processes and recommendations for identifying, analysing, evaluating, treating and monitoring AI-related risks, and for integrating them into organizational risk management and governance across the AI lifecycle. The standard provides guidance rather than prescriptive requirements.
Q: Who typically uses it?
A: Developers, system integrators, risk and compliance teams, product owners, auditors and regulators — essentially any organization or stakeholder responsible for developing, deploying or using AI who needs structured AI risk guidance.
Q: Is it current or superseded?
A: As of its publication in February 2023, ISO/IEC 23894:2023 is the current published edition (Edition 1). Users should check with national or international standards bodies for any amendments or later revisions to confirm the most recent status.
Q: Is it part of a series?
A: Yes — it is part of the growing set of ISO/IEC standards addressing AI (including ISO/IEC 22989 for concepts/terminology and ISO/IEC 42001 for AI management systems) and is intended to be used alongside general risk management guidance such as ISO 31000.
Q: What are the key keywords?
A: AI risk management, governance, lifecycle, mitigation, trustworthiness, compliance, ISO/IEC JTC 1/SC 42.