ISO IEC TR 24027-2021 PDF

St ISO IEC TR 24027-2021

Name in English:
St ISO IEC TR 24027-2021

Name in Russian:
Ст ISO IEC TR 24027-2021

Description in English:

Original standard ISO IEC TR 24027-2021, full version in PDF. Additional information and a preview are available on request.

Description in Russian:
Original standard ISO IEC TR 24027-2021, full version in PDF. Additional information and a preview are available on request. (Оригинальный стандарт ISO IEC TR 24027-2021 в PDF, полная версия. Дополнительная информация и превью по запросу.)
Document status:
Active

Format:
Electronic (PDF)

Delivery time (for English version):
1 business day

Delivery time (for Russian version):
365 business days

SKU:
stiso27530

Price (per document language):
€25

Full title and description

Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making. This technical report explains sources and manifestations of bias in AI systems and describes measurement techniques and assessment methods to identify, evaluate and help treat bias-related vulnerabilities across the AI system lifecycle.

Abstract

The report addresses bias in relation to AI systems with particular attention to AI‑aided decision‑making. It describes types and sources of unwanted bias, outlines metrics and measurement approaches, and provides guidance on assessment and mitigation across lifecycle phases such as data collection, training, continual learning, design, testing, evaluation and operational use. The aim is to support fairer, more transparent and trustworthy AI outcomes.

General information

  • Status: Published.
  • Publication date: 2021-11-05 (Edition 1).
  • Publisher: International Organization for Standardization (ISO) / IEC.
  • ICS / categories: 35.020 (Information technology in general).
  • Edition / version: Edition 1.0 (2021).
  • Number of pages: 39 pages (technical report).

Scope

This technical report addresses bias in AI systems and AI‑aided decision making and covers methods for identifying, measuring and assessing bias. It applies to all phases of the AI system lifecycle (requirements, data collection and preparation, model development and training, validation and testing, deployment, operation and continual learning) and is intended to inform designers, developers, evaluators and users of AI systems about bias‑related vulnerabilities and mitigation options.

Key topics and requirements

  • Definitions and taxonomy of bias and fairness in AI (forms such as data bias, representation bias, automation bias, stereotyping and unfair quality of service).
  • Sources of unwanted bias: human cognitive biases, societal and historical biases, dataset collection and labelling processes, requirement/specification bias and system design choices.
  • Measurement techniques and metrics for detecting and quantifying bias and disparate impacts across groups.
  • Assessment workflows and recommended evaluation methods across lifecycle stages (planning, testing, statistical and empirical assessment, continuous monitoring).
  • Guidance on mitigation approaches and practical considerations for treating bias-related vulnerabilities (data strategies, model choices, evaluation design and human oversight).
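To make the "metrics for detecting and quantifying bias and disparate impacts across groups" concrete, here is a minimal illustrative sketch of two widely used group‑fairness measures (demographic parity difference and the disparate impact ratio). The function names, example data and the four‑fifths threshold convention are assumptions for illustration, not definitions taken from the report itself:

```python
# Illustrative sketch only: two common group-fairness metrics of the kind
# a TR 24027-style bias assessment might compute. Names and data are
# hypothetical, not drawn from the technical report.

def selection_rate(outcomes):
    """Fraction of favourable (positive) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower to the higher selection rate.

    The conventional 'four-fifths rule' flags values below 0.8
    as a potential adverse impact.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def demographic_parity_difference(group_a, group_b):
    """Absolute difference in selection rates; 0 means parity."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical binary decisions (1 = favourable outcome) for two groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # selection rate 3/8 = 0.375

print(disparate_impact_ratio(group_a, group_b))        # 0.6
print(demographic_parity_difference(group_a, group_b)) # 0.25
```

In practice such metrics would be computed per protected group across the lifecycle stages listed above (testing, statistical assessment, continuous monitoring) rather than as a one‑off check.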

Typical use and users

Primary users include AI system designers and engineers, data scientists, validation and test teams, product managers, compliance and ethics officers, procurement officers evaluating AI suppliers, and regulators or auditors assessing AI systems for fairness and non‑discrimination. The report is used to inform bias risk assessments, to design evaluation plans and to support organizational guidance on fair and trustworthy AI.

Related standards

This technical report is part of the ISO/IEC JTC 1/SC 42 AI work programme and complements other AI deliverables such as ISO/IEC TR 24028 (Overview of trustworthiness in artificial intelligence), ISO/IEC TR 24029‑1 (Assessment of the robustness of neural networks — Part 1: Overview), ISO/IEC 23894 (Guidance on AI risk management), ISO/IEC 42001 (AI management systems) and companion AI technical reports on use cases and evaluation. Together these deliverables form an ecosystem addressing trustworthiness, risk management, robustness, governance and lifecycle controls for AI.

Keywords

AI bias; fairness; measurement; assessment; mitigation; AI lifecycle; evaluation; transparency; trustworthiness; AI‑aided decision making.

FAQ

Q: What is this standard?

A: ISO/IEC TR 24027:2021 is a technical report titled "Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making" that explains bias types, sources, measurement techniques and assessment methods to address bias in AI systems.

Q: What does it cover?

A: It covers identification and characterization of bias, measurement and evaluation approaches, lifecycle‑stage guidance (data, training, testing, deployment and monitoring), and practical mitigation considerations for AI systems used in decision‑making contexts.

Q: Who typically uses it?

A: AI developers, data scientists, test and validation teams, ethics/compliance officers, procurement and risk teams, and regulators or auditors who need a cross‑disciplinary reference for assessing and treating bias in AI.

Q: Is it current or superseded?

A: ISO/IEC TR 24027:2021 was published in November 2021 (Edition 1) and remains current as a technical report. Identical national and regional adoptions exist (published roughly 2022–2024). Users should check their national standards body or ISO for any later revisions or related updates.

Q: Is it part of a series?

A: Yes — it belongs to the ISO/IEC AI technical reports and standards family produced by ISO/IEC JTC 1/SC 42 that address trustworthiness, robustness, risk management, AI management systems and use cases (e.g., TR 24028, TR 24029 series, ISO/IEC 23894, ISO/IEC 42001 and TR 24030).

Q: What are the key keywords?

A: Bias, fairness, measurement, assessment, mitigation, AI lifecycle, transparency, trustworthiness, AI‑aided decision making.