ISO/IEC TR 5469:2024
Full title and description
ISO/IEC TR 5469:2024 — Artificial intelligence — Functional safety and AI systems. A technical report that describes properties, risk factors, methods and processes for integrating AI into safety-related functions, for using non-AI safety functions to protect AI-controlled equipment, and for using AI systems in the design and development of safety-related functions.
Abstract
This technical report surveys characteristics and hazards introduced by AI in safety contexts, identifies related risk factors, and describes available methods, lifecycle processes and assurance approaches for: (1) using AI inside safety‑related functions that realize required functionality; (2) using non‑AI safety functions to ensure safety of AI‑controlled equipment; and (3) using AI systems to support design and development of safety‑related functions.
General information
- Status: Published.
- Publication date: 8 January 2024 (publication effective date).
- Publisher: International Organization for Standardization / International Electrotechnical Commission (ISO/IEC), prepared by JTC 1/SC 42 (Artificial Intelligence).
- ICS / categories: 35.020 (Information technology in general).
- Edition / version: Edition 1.0 (Technical Report).
- Number of pages: 73.
Scope
This report describes properties, related risk factors, and available methods and processes relating to: (a) use of AI inside a safety‑related function to realize the functionality; (b) use of non‑AI safety‑related functions to ensure safety for AI‑controlled equipment; and (c) use of AI systems to design and develop safety‑related functions. The intent is to inform developers, integrators and safety assessors about AI‑specific hazards and mitigation approaches across the safety lifecycle.
Key topics and requirements
- Characterization of AI properties that affect functional safety (opacity, nondeterminism, learning and adaptation).
- Identification of AI‑specific risk factors and hazard sources across the system lifecycle.
- Guidance on verification, validation and testing strategies for AI in safety contexts (robustness, edge cases, performance under distribution shift).
- Architectural and system‑level measures (redundancy, safe‑state transitions, monitoring and fail‑safe mechanisms) to contain AI failures.
- Processes for assurance, documentation and evidence collection to support safety cases when AI components are involved.
- Recommendations on data quality, model maintenance, and post‑deployment monitoring to manage drift and emergent behaviours.
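One of the architectural patterns named above (a non-AI supervisory layer containing an AI component) can be sketched as follows. This is an illustrative example only, not text from the report: the `SafetyEnvelope` type and `supervised_command` function are hypothetical names, and the "envelope" check stands in for whatever independently validated acceptance criterion a real system would use.

```python
from dataclasses import dataclass

# Hypothetical sketch: a simple, non-AI supervisory monitor that bounds the
# output of an AI controller. If the AI command leaves the validated operating
# envelope, the monitor overrides it and forces a transition to a safe state.

@dataclass(frozen=True)
class SafetyEnvelope:
    min_cmd: float   # lower bound of the validated command range
    max_cmd: float   # upper bound of the validated command range
    safe_cmd: float  # command applied when entering the safe state

def supervised_command(ai_cmd: float, env: SafetyEnvelope) -> tuple[float, bool]:
    """Return (command_to_apply, safe_state_entered)."""
    if env.min_cmd <= ai_cmd <= env.max_cmd:
        # AI output is inside the envelope: pass it through unchanged.
        return ai_cmd, False
    # AI output violates the envelope: substitute the safe command.
    return env.safe_cmd, True
```

For example, with an envelope of [0.0, 1.0] and a safe command of 0.0, an in-range AI command of 0.5 passes through, while an out-of-range command of 1.5 is replaced by 0.0 and the safe-state flag is raised. The deliberate design point, consistent with the report's theme, is that the monitor itself contains no AI and can be verified with conventional functional-safety techniques.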
Typical use and users
Intended users include safety engineers, system integrators, AI developers, validation and verification teams, certification and conformity assessment bodies, and regulatory stakeholders working where AI interacts with safety‑critical functions (e.g., automotive, industrial automation, robotics, medical devices and critical infrastructure). The report is advisory (technical report) and aimed at informing integration of AI within established safety frameworks.
Related standards
Relevant and complementary standards and technical reports include ISO/IEC 23894 (Information technology — AI — Guidance on risk management), the ISO/IEC 24029 series on robustness of neural networks (Part 1: overview; Part 2: methodology for the use of formal methods), ISO/IEC TR 24027 (bias in AI systems), and established functional safety standards such as ISO 26262 (road vehicles) and IEC 61508 (functional safety of electrical/electronic/programmable electronic safety-related systems). These documents are commonly used together when addressing AI in safety-critical applications.
Keywords
AI, artificial intelligence, functional safety, safety lifecycle, risk factors, hazard analysis, verification and validation, robustness, monitoring, safety architecture, assurance case, ISO/IEC JTC 1/SC 42.
FAQ
Q: What is this standard?
A: ISO/IEC TR 5469:2024 is a technical report titled "Artificial intelligence — Functional safety and AI systems" that provides descriptive guidance on risks, methods and processes for using AI in safety‑related contexts. It is not a normative specification but an advisory report produced by ISO/IEC JTC 1/SC 42.
Q: What does it cover?
A: It covers properties of AI that affect functional safety, AI‑specific risk factors, methods for verification/validation and robustness assessment, architectural measures to mitigate AI failures, and lifecycle processes (development, testing, deployment and monitoring) relevant to safety‑critical systems.
Q: Who typically uses it?
A: Safety engineers, AI developers, system integrators, test and V&V teams, certification bodies and regulators who need to understand and manage AI‑related safety risks across the development and operational lifecycle.
Q: Is it current or superseded?
A: As published on 8 January 2024, ISO/IEC TR 5469:2024 is current (Edition 1.0) and has not been indicated as superseded. Users should check official national bodies or ISO/IEC publications for any amendments or future revisions.
Q: Is it part of a series?
A: It sits within the growing set of ISO/IEC AI deliverables produced by JTC 1/SC 42 and complements other TRs and standards addressing AI risk management, robustness and bias (for example ISO/IEC 23894, ISO/IEC TR 24029 series and ISO/IEC TR 24027). It is a standalone technical report rather than a numbered multi‑part normative standard.
Q: What are the key keywords?
A: Functional safety, AI systems, risk factors, robustness, verification and validation, safety lifecycle, assurance, monitoring, redundancy, ISO/IEC JTC 1/SC 42.