ISO IEC TS 6254-2025 PDF

Standard ISO/IEC TS 6254:2025

Name in English:
Standard ISO/IEC TS 6254:2025

Name in Russian:
Ст ISO IEC TS 6254-2025

Description in English:

Original standard ISO/IEC TS 6254:2025, full version in PDF. Additional information and a preview are available on request.

Description in Russian:
Original standard ISO/IEC TS 6254:2025, full version in PDF. Additional information and a preview are available on request.
Document status:
Active

Format:
Electronic (PDF)

Delivery time (for English version):
1 business day

Delivery time (for Russian version):
365 business days

SKU:
stiso34264

Price (per language version):
€25

Full title and description

Information technology — Artificial intelligence — Objectives and approaches for explainability and interpretability of machine learning (ML) models and artificial intelligence (AI) systems. This Technical Specification describes objectives that stakeholders may have for explainability and interpretability, and presents approaches and methods to meet those objectives across the AI system lifecycle.

Abstract

This document provides guidance on explainability and interpretability for ML models and AI systems: it identifies stakeholder objectives (e.g., users, developers, auditors, regulators), maps those objectives to candidate approaches and methods, and discusses applicability throughout the AI system lifecycle as defined in ISO/IEC 22989. It is intended to help practitioners select, document and justify explainability measures and to clarify limits and trade-offs of different approaches.

General information

  • Status: Published.
  • Publication date: September 2025 (2025-09).
  • Publisher: ISO/IEC (ISO/IEC JTC 1/SC 42).
  • ICS / categories: 35.020 (Information technology).
  • Edition / version: Edition 1 (Technical Specification, TS 6254:2025).
  • Number of pages: 69 (approx.).

Scope

Provides guidance on selecting and applying explainability and interpretability approaches for ML models and AI systems to satisfy stakeholder objectives. Coverage includes defining objectives; example methods (local/global, model-specific/model-agnostic, surrogate models, feature importance, example-based explanations, visualisations, counterfactuals); applicability across lifecycle stages; and practical considerations such as limitations, evaluation and documentation. The specification references, and is intended to be used alongside, the foundational AI terminology and lifecycle concepts given in ISO/IEC 22989.
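To make the "model-agnostic, global" category concrete, the following is a minimal sketch of permutation feature importance: shuffle one input feature at a time and measure how much a black-box model's predictions change. The model and feature values below are illustrative assumptions, not content taken from the TS itself.

```python
import random

# Hypothetical black-box model: predicts a score from two features.
# Weights (3.0 and 0.2) are chosen only so one feature clearly matters more.
def model(x):
    return 3.0 * x[0] + 0.2 * x[1]

def permutation_importance(model, data, n_repeats=10, seed=0):
    """Model-agnostic global importance: shuffle one feature at a time
    and average the absolute change in the model's predictions."""
    rng = random.Random(seed)
    baseline = [model(x) for x in data]
    importances = []
    n_features = len(data[0])
    for j in range(n_features):
        total = 0.0
        for _ in range(n_repeats):
            col = [x[j] for x in data]
            rng.shuffle(col)
            shuffled = [list(x) for x in data]
            for i, v in enumerate(col):
                shuffled[i][j] = v
            preds = [model(x) for x in shuffled]
            total += sum(abs(p - b) for p, b in zip(preds, baseline)) / len(data)
        importances.append(total / n_repeats)
    return importances

data = [[i, 10 - i] for i in range(10)]
imp = permutation_importance(model, data)
# Feature 0 (weight 3.0) should receive a much larger importance
# score than feature 1 (weight 0.2).
```

Because the method only queries the model's predictions, it applies to any model, which is what "model-agnostic" means in the specification's categorisation.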

Key topics and requirements

  • Definition and categorization of explainability objectives for different stakeholders (e.g., transparency, contestability, verification).
  • Mapping of stakeholder objectives to candidate explanation approaches and methods (local vs global; model-specific vs model-agnostic).
  • Guidance on applicability through the AI lifecycle: development, validation, deployment, operation and retirement.
  • Recommendations for documenting chosen approaches, assumptions, limitations and evaluation criteria.
  • Discussion of trade-offs (e.g., fidelity vs simplicity, privacy and security considerations) and when synthetic or surrogate explanations are acceptable.
  • Practical examples and illustrative use cases to support method selection and evaluation.
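As one illustration of the counterfactual explanations mentioned above, the sketch below searches for the smallest change to a single input that flips a threshold decision. The toy "loan" classifier, its coefficients, and the greedy search are assumptions for illustration only; they are not taken from the TS.

```python
# Hypothetical threshold classifier: "approve" when score >= 0.5.
def score(income, debt):
    return 0.01 * income - 0.02 * debt

def counterfactual_income(income, debt, step=1.0, max_iter=10000):
    """Greedy search for the smallest income increase that flips the
    decision to 'approve', holding debt fixed. Illustrative only."""
    delta = 0.0
    while score(income + delta, debt) < 0.5 and delta < step * max_iter:
        delta += step
    return delta

# Applicant currently denied: score(30, 5) = 0.2 < 0.5.
d = counterfactual_income(income=30, debt=5)
# d is the extra income needed to reach the approval threshold.
```

A counterfactual of this kind directly supports the contestability objective: it tells an affected person what minimal change would have produced a different outcome.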

Typical use and users

Target users include AI/ML developers and engineers, model validators and QA teams, product managers, auditors and assessors, compliance and risk teams, regulators and policy makers, and researchers. Typical uses are designing explainability requirements, selecting explanation methods for a model or system, documenting explainability choices for audits or procurement, and supporting regulatory or stakeholder communications.

Related standards

Key related ISO/IEC standards and publications that complement TS 6254 include ISO/IEC 22989 (AI concepts and terminology), ISO/IEC 42001 (AI management systems), ISO/IEC 23894 (AI — risk management), ISO/IEC 23053 (framework for AI systems using ML) and related transparency- and taxonomy-focused work (e.g., ISO/IEC 12792). These documents provide terminology, management-system requirements, risk-management guidance and system frameworks that TS 6254 is designed to work with.

Keywords

Explainability, interpretability, explainability objectives, ML model explanations, transparency, stakeholder objectives, local explanations, global explanations, surrogate models, counterfactuals, documentation, AI lifecycle.

FAQ

Q: What is this standard?

A: A Technical Specification providing guidance on objectives and approaches for explainability and interpretability of ML models and AI systems (ISO/IEC TS 6254:2025).

Q: What does it cover?

A: It identifies stakeholder explainability objectives, describes candidate methods and approaches (local/global, model-specific/model-agnostic, example-based, counterfactuals, visualisations, etc.), and gives guidance on applicability, documentation and evaluation across the AI lifecycle.

Q: Who typically uses it?

A: AI/ML practitioners (developers, validators), product and risk managers, auditors, compliance teams, and regulators who need to specify, assess or document explainability for models and systems.

Q: Is it current or superseded?

A: Current (published September 2025). As a TS it provides guidance; users should track related ISO/IEC JTC 1/SC 42 work for amendments or companion standards.

Q: Is it part of a series?

A: It is part of the ISO/IEC JTC 1/SC 42 family of AI standards and is intended to be used alongside standards for terminology, risk management and AI management systems (for example ISO/IEC 22989, ISO/IEC 23894 and ISO/IEC 42001).

Q: What are the key keywords?

A: Explainability, interpretability, transparency, stakeholder objectives, local/global explanations, surrogate models, counterfactual explanations, documentation, evaluation, AI lifecycle.