ISO IEC 42005-2025 PDF

Name in English:
St ISO IEC 42005-2025

Name in Russian:
St ISO IEC 42005-2025

Description in English:

Original standard ISO/IEC 42005:2025, full version in PDF. Additional information and a preview are available on request.

Description in Russian:
Original standard ISO/IEC 42005:2025, full version in PDF. Additional information and a preview are available on request.
Document status:
Active

Format:
Electronic (PDF)

Delivery time (for English version):
1 business day

Delivery time (for Russian version):
365 business days

SKU:
stiso26452

Price:
€25

Full title and description

ISO/IEC 42005:2025 — Information technology — Artificial intelligence (AI) — AI system impact assessment. This international standard provides guidance for organisations on planning, conducting and documenting AI system impact assessments that identify how an AI system and its foreseeable applications may affect individuals, groups and society, supporting transparency, accountability and integration with AI risk management and AI management processes.

Abstract

ISO/IEC 42005:2025 gives practical guidance on when and how to perform AI system impact assessments across the AI system lifecycle (design, development, deployment and post‑market monitoring). It covers assessment scoping, stakeholder considerations, potential social and human impacts (including fairness, privacy, safety and rights), documentation practices and how assessment outcomes can feed into risk management and AI management systems.

General information

  • Status: Published
  • Publication date: 28 May 2025
  • Publisher: ISO/IEC (International Organization for Standardization / International Electrotechnical Commission)
  • ICS / categories: 35.020 (Information technology)
  • Edition / version: Edition 1.0 (2025)
  • Number of pages: 39

Technical committee: ISO/IEC JTC 1/SC 42 (Artificial Intelligence).

Scope

This document provides guidance for organisations developing, providing or using AI systems, regardless of size or sector, on performing AI system impact assessments. It addresses what impacts to consider, recommended stages of the AI lifecycle for assessment, documentation and how to integrate the assessment process with organisational AI risk management and AI management systems. The guidance is non‑prescriptive and intended to be adaptable to different contexts and regulatory environments.

Key topics and recommendations

  • Framing and scoping of AI system impact assessments (purpose, affected populations, lifecycle stage).
  • Identification of potential human, social and environmental impacts (fairness, discrimination, safety, privacy, human rights, sustainability).
  • Assessment methods and evidence gathering (data review, stakeholder engagement, scenario analysis, testing and validation).
  • Documentation practices for transparency and traceability (assessment records, rationales, mitigation measures).
  • Integration with AI risk management and AI management systems — ensuring assessment outcomes inform design, deployment and monitoring decisions.
  • Recommendations for monitoring, reviewing and updating assessments post-deployment as the system and its context evolve.
  • Emphasis on stakeholder involvement and communication of assessment results to affected parties and governance bodies.

Typical use and users

Organisations that develop, deploy or procure AI systems; product and program managers; risk and compliance teams; AI governance and ethics committees; auditors and regulators; consultants performing impact or ethical assessments; and anyone implementing or overseeing AI management systems seeking to document and mitigate social and human impacts.

Related standards

ISO/IEC 42005:2025 is complementary to other AI and management standards, notably ISO/IEC 42001 (AI management systems) and ISO/IEC 23894 (AI — guidance on risk management). It also fits within the broader family of AI‑related ISO/IEC work (terminology, governance and technical guidance) produced by JTC 1/SC 42.

Keywords

AI system impact assessment, AI governance, AI lifecycle, impact assessment, risk management, transparency, accountability, fairness, privacy, human rights, AI management system, stakeholder engagement, monitoring.

FAQ

Q: What is this standard?

A: An international guidance standard (ISO/IEC 42005:2025) offering organisations practical advice on planning, conducting and documenting AI system impact assessments to identify social and human impacts of AI systems.

Q: What does it cover?

A: It covers scoping assessments, identifying impacted populations and impact types (e.g., fairness, privacy, safety), methods for evidence gathering, documentation practices, integration with risk and management systems, and guidance on monitoring and updating assessments throughout the AI lifecycle.

Q: Who typically uses it?

A: Developers, providers and users of AI systems; risk/compliance teams; product managers; ethics and governance bodies; auditors and regulators; and consultants doing impact or ethical assessments.

Q: Is it current or superseded?

A: Current — ISO/IEC 42005:2025 was published in 2025 and is an active international standard.

Q: Is it part of a series?

A: It is part of the growing family of ISO/IEC AI standards developed by JTC 1/SC 42 and is intended to be used alongside standards such as ISO/IEC 42001 (AI management systems) and ISO/IEC 23894 (AI risk management).

Q: What are the key keywords?

A: AI impact assessment, AI governance, AI lifecycle, risk management, transparency, accountability, fairness, privacy, documentation, stakeholder engagement.