Chapter 6

Clause 6: Planning

AI risk assessment, risk treatment, establishing AI objectives, and planning actions to address risks and opportunities.

25 min read

Chapter Overview

Clause 6 is one of the most critical clauses in ISO 42001, covering risk-based planning. It includes AI-specific requirements for risk assessment (6.1.2) and risk treatment (6.1.3) that go beyond standard Annex SL requirements. Mastering this clause is essential for both the exam and implementation.

Clause Structure

| Sub-clause | Title | Focus |
| --- | --- | --- |
| 6.1 | Actions to address risks and opportunities | Risk-based planning |
| 6.1.1 | General | Determining risks and opportunities |
| 6.1.2 | AI risk assessment | AI-specific risk assessment process |
| 6.1.3 | AI risk treatment | AI-specific risk treatment process |
| 6.1.4 | Planning actions | Integrating actions into the AIMS |
| 6.2 | AI objectives and planning to achieve them | Setting and achieving objectives |

6.1.1 General - Risks and Opportunities

Requirement

When planning for the AIMS, the organization shall consider the issues referred to in 4.1 and the requirements referred to in 4.2 and determine the risks and opportunities that need to be addressed to:

  • Ensure the AIMS can achieve its intended outcomes
  • Prevent or reduce undesired effects
  • Achieve continual improvement

Two Types of Risk

AIMS Risks: Risks to the management system itself (e.g., lack of resources, poor implementation)
AI System Risks: Risks from AI systems (e.g., bias, errors, security vulnerabilities, societal harm)

Clause 6.1 addresses both types. The AI-specific risks are detailed in 6.1.2 and 6.1.3.

6.1.2 AI Risk Assessment

Requirement

The organization shall define and apply an AI risk assessment process that:

  • Establishes and maintains AI risk criteria
  • Ensures repeated AI risk assessments produce consistent, valid, and comparable results
  • Identifies AI risks associated with the loss of confidentiality, integrity, and availability, and other relevant AI risks
  • Identifies risks associated with the development, provision, or use of AI systems throughout their lifecycle
  • Analyzes AI risks by assessing potential consequences and realistic likelihood
  • Evaluates AI risks by comparing results against established criteria and prioritizing risks for treatment

AI Risk Categories (from Annex C)

Technical Risks: Model performance, robustness, security, reliability
Data Risks: Bias, quality, privacy, provenance, representativeness
Operational Risks: Availability, scalability, maintainability, integration
Ethical Risks: Fairness, transparency, human agency, accountability
Legal/Compliance Risks: Regulatory violations, liability, contractual
Societal Risks: Employment impact, environmental, social inequality
Reputational Risks: Public perception, stakeholder trust, brand damage

Risk Assessment Process

| Step | Activity | Output |
| --- | --- | --- |
| 1 | Establish context | Risk criteria, scope, boundaries |
| 2 | Risk identification | Risk register with identified risks |
| 3 | Risk analysis | Likelihood and consequence ratings |
| 4 | Risk evaluation | Prioritized risks, treatment decisions |
| 5 | Documentation | Risk assessment report |

Risk Criteria

Description

Establish criteria for:

| Criteria Type | Description | Example Scale |
| --- | --- | --- |
| Likelihood | Probability of the risk occurring | 1-5 (Rare to Almost Certain) |
| Consequence | Impact if the risk materializes | 1-5 (Negligible to Catastrophic) |
| Risk Level | Combined likelihood × consequence | Low, Medium, High, Critical |
| Risk Appetite | Acceptable risk threshold | Accept risks below a score of 8 |
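
The example scales above can be combined into a simple scoring sketch. This is illustrative only: the level bands and the appetite threshold of 8 mirror the example scales in this table, not values defined by ISO 42001.

```python
# Illustrative risk scoring based on the example scales above.
# The level bands and appetite threshold are assumptions for this sketch,
# not values defined by ISO 42001.

def risk_level(likelihood: int, consequence: int) -> tuple[int, str]:
    """Combine 1-5 likelihood and consequence into a score and level."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be 1-5")
    score = likelihood * consequence
    if score >= 15:
        level = "Critical"
    elif score >= 8:
        level = "High"
    elif score >= 4:
        level = "Medium"
    else:
        level = "Low"
    return score, level

RISK_APPETITE = 8  # example threshold: accept risks scoring below 8

score, level = risk_level(likelihood=4, consequence=3)
decision = "accept" if score < RISK_APPETITE else "treat"
print(score, level, decision)  # 12 High treat
```

Whatever scales an organization adopts, documenting them this explicitly is what makes repeated assessments "consistent, valid, and comparable" as 6.1.2 requires.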

AI-Specific Risk Factors

| Factor | Risk Considerations |
| --- | --- |
| Autonomy Level | Higher autonomy = higher risk potential |
| Decision Impact | Decisions affecting rights, safety, finances |
| Reversibility | Can AI decisions be undone? |
| Transparency | Can decisions be explained? |
| Data Sensitivity | Personal, confidential, or sensitive data |
| Scale | Number of people/transactions affected |
| Vulnerability | Susceptibility to attacks or manipulation |

Template: AI Risk Register

AI Risk Register Template

Columns:
• Risk ID (unique identifier)
• AI System (which system)
• Risk Category (technical, data, ethical, etc.)
• Risk Description (what could happen)
• Risk Source (cause/trigger)
• Affected Parties (who is impacted)
• Likelihood (1-5)
• Consequence (1-5)
• Risk Score (L × C)
• Risk Level (Low/Medium/High/Critical)
• Existing Controls (current mitigation)
• Treatment Decision (accept/treat/transfer/avoid)
• Treatment Actions (if treating)
• Risk Owner (accountable person)
• Review Date (next review)
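
A minimal version of this register can be sketched as a data structure. This is a hedged illustration: the class name, field subset, and example entry are invented, and the level bands reuse the example scales from 6.1.2 rather than anything the standard defines.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of an AI risk register (a subset of the columns above)."""
    risk_id: str
    ai_system: str
    category: str        # technical, data, ethical, ...
    description: str
    likelihood: int      # 1-5
    consequence: int     # 1-5
    treatment: str       # accept / treat / transfer / avoid
    owner: str           # accountable person

    @property
    def score(self) -> int:
        # Risk Score = Likelihood x Consequence
        return self.likelihood * self.consequence

    @property
    def level(self) -> str:
        # Example bands; not defined by the standard
        s = self.score
        if s >= 15:
            return "Critical"
        if s >= 8:
            return "High"
        if s >= 4:
            return "Medium"
        return "Low"

entry = RiskEntry(
    risk_id="AIR-001", ai_system="Loan scoring model", category="data",
    description="Training data under-represents some applicant groups",
    likelihood=3, consequence=4, treatment="treat", owner="Head of Risk",
)
print(entry.risk_id, entry.score, entry.level)  # AIR-001 12 High
```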

6.1.3 AI Risk Treatment

Requirement

The organization shall define and apply an AI risk treatment process to:

  • Select appropriate AI risk treatment options, taking account of the risk assessment results
  • Determine all controls necessary to implement the AI risk treatment option(s) chosen
  • Compare controls with Annex A to verify no necessary controls are omitted
  • Produce a Statement of Applicability (SoA)
  • Formulate an AI risk treatment plan
  • Obtain risk owners' approval of the AI risk treatment plan and acceptance of residual AI risks

Risk Treatment Options

Avoid: Eliminate the risk by not proceeding with the AI activity
Modify/Mitigate: Implement controls to reduce likelihood or consequence
Transfer: Share risk with third party (insurance, outsourcing)
Accept: Acknowledge and monitor without additional treatment

Statement of Applicability (SoA)

The SoA is a mandatory document that:

  • Lists all 38 Annex A controls
  • States whether each control is applicable or not
  • Provides justification for exclusions
  • Indicates implementation status
  • References how controls are implemented

SoA Content Requirements

For each control:
• Control reference (A.2.2, A.3.2, etc.)
• Control name
• Applicable? (Yes/No)
• Justification if not applicable
• Implementation status (Implemented/Partial/Planned/Not Implemented)
• Implementation reference (document, process, tool)
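
The exclusion rule above (every "No" needs a justification) lends itself to an automated self-check during implementation. The control references and row format below are placeholders for illustration, not the real Annex A structure.

```python
# Flag SoA rows that are excluded without a documented justification.
# Control references and row format are placeholders for illustration.

soa = {
    "A.2.2": {"applicable": True, "status": "Implemented",
              "justification": ""},
    "A.3.2": {"applicable": False, "status": "Not applicable",
              "justification": ""},  # missing rationale -> nonconformity
}

def soa_gaps(soa: dict) -> list[str]:
    """Return the controls excluded without a justification."""
    gaps = []
    for ref, row in sorted(soa.items()):
        if not row["applicable"] and not row["justification"].strip():
            gaps.append(f"{ref}: excluded without justification")
    return gaps

print(soa_gaps(soa))  # ['A.3.2: excluded without justification']
```

An unjustified exclusion is a common major nonconformity, so a check like this is cheap insurance before an audit.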

Risk Treatment Plan

For risks requiring treatment, document:

| Element | Description |
| --- | --- |
| Risk Reference | Link to risk register |
| Treatment Actions | Specific actions to implement |
| Controls | Annex A controls being implemented |
| Resources | Budget, personnel, tools needed |
| Responsibility | Who is accountable |
| Timeline | Target completion date |
| Success Criteria | How effectiveness will be measured |
| Residual Risk | Expected risk level after treatment |
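
A treatment-plan record can mirror this table directly. Everything below is an assumed sketch: the field names, the placeholder Annex A references, and the appetite comparison are illustrative, not ISO 42001 text.

```python
# Example treatment-plan record; fields mirror the table above.
# The residual-risk appetite rule is an assumed policy, not ISO text.

plan = {
    "risk_reference": "AIR-001",
    "treatment_actions": ["Rebalance training data", "Add fairness tests"],
    "controls": ["A.7.x", "A.6.x"],  # placeholder Annex A references
    "responsibility": "Head of Risk",
    "timeline": "2025-09-30",
    "residual_score": 6,             # expected score after treatment
}

RISK_APPETITE = 8  # example: residual risks below 8 may be accepted

def residual_within_appetite(plan: dict) -> bool:
    """Check the expected residual risk against the example appetite.

    Note: 6.1.3 still requires the risk owner to formally approve the
    plan and accept the residual risk, whatever this check returns.
    """
    return plan["residual_score"] < RISK_APPETITE

print(residual_within_appetite(plan))  # True
```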

6.1.4 Planning Actions

Requirement

The organization shall plan:

  • Actions to address risks and opportunities
  • How to integrate and implement actions into AIMS processes
  • How to evaluate the effectiveness of these actions

6.2 AI Objectives and Planning to Achieve Them

Requirement

The organization shall establish AI objectives at relevant functions, levels, and processes. The AI objectives shall:

  • Be consistent with the AI policy
  • Be measurable (if practicable)
  • Take into account applicable requirements
  • Be monitored
  • Be communicated
  • Be updated as appropriate

When planning how to achieve AI objectives, the organization shall determine:

  • What will be done
  • What resources will be required
  • Who will be responsible
  • When it will be completed
  • How the results will be evaluated

SMART Objectives

Specific - Clear and well-defined
Measurable - Quantifiable metrics
Achievable - Realistic and attainable
Relevant - Aligned with AI policy
Time-bound - Clear deadline

Example AI Objectives

| Objective | Metric | Target | Timeline |
| --- | --- | --- | --- |
| Complete AI risk assessments | % of AI systems assessed | 100% | Q2 2025 |
| Implement human oversight | % of high-risk AI with oversight | 100% | Q3 2025 |
| AI incident response | Average response time | <4 hours | Ongoing |
| Staff training | % of staff trained on AI policy | 95% | Q4 2025 |
| Reduce AI bias incidents | Bias-related complaints | 50% reduction | 12 months |
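
Monitoring objectives like these can be as simple as comparing current metric values against targets. The numbers below are invented for the example; only the objective names and targets echo the table above.

```python
# Track measurable AI objectives against their targets (example data).

objectives = [
    {"name": "AI risk assessments", "metric": "% of AI systems assessed",
     "target": 100, "current": 80},
    {"name": "Staff training", "metric": "% staff trained on AI policy",
     "target": 95, "current": 97},
]

for obj in objectives:
    status = "met" if obj["current"] >= obj["target"] else "in progress"
    print(f"{obj['name']}: {obj['current']}% of {obj['target']}% ({status})")
```

Reviewing this kind of status report at planned intervals satisfies the "monitored" and "communicated" requirements of 6.2.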

Documented Information Requirements

Mandatory Documents - Clause 6

Required:
• AI Risk Assessment Process (6.1.2)
• AI Risk Treatment Process (6.1.3)
• Statement of Applicability (6.1.3)
• AI Risk Treatment Plan (6.1.3)

Recommended:
• Risk Criteria Document
• AI Risk Register
• AI Objectives Register

Sample Audit Questions

Auditor Questions - Clause 6

6.1.2 AI Risk Assessment:
• Show me your AI risk assessment methodology
• What risk criteria do you use?
• How do you ensure consistent risk assessments?
• Walk me through a risk assessment for one of your AI systems
• What AI-specific risks have you identified?
• How do you consider the entire AI lifecycle in risk assessment?

6.1.3 AI Risk Treatment:
• Show me your Statement of Applicability
• How did you determine which Annex A controls are applicable?
• Justify why control X is marked as not applicable
• Show me your risk treatment plan
• How do risk owners approve treatment plans?
• What residual risks have been accepted and by whom?

6.2 AI Objectives:
• What are your AI objectives?
• How do objectives link to your AI policy?
• How do you measure progress against objectives?
• Who is responsible for each objective?

Common Nonconformities

| Type | Nonconformity | How to Avoid |
| --- | --- | --- |
| Major | No documented risk assessment process | Document the methodology with criteria |
| Major | No Statement of Applicability | Create an SoA covering all 38 controls |
| Major | SoA missing justification for exclusions | Document the rationale for each exclusion |
| Major | Risk treatment plan not approved by risk owners | Obtain formal approvals |
| Minor | Risk assessments not covering the AI lifecycle | Assess risks at each lifecycle stage |
| Minor | Objectives not measurable | Define metrics for each objective |
| Minor | Risk criteria not documented | Document likelihood/consequence scales |

Key Takeaways - Clause 6

1. Clause 6 contains AI-specific requirements (6.1.2, 6.1.3) beyond standard Annex SL
2. Risk assessment must cover the entire AI system lifecycle
3. Statement of Applicability is mandatory and must justify exclusions
4. Risk treatment plans require risk owner approval
5. AI objectives should be SMART and linked to policy
6. Compare controls against Annex A to ensure completeness

Exam Tips - Clause 6

• Know that 6.1.2 and 6.1.3 are AI-specific extensions to Annex SL
  • Remember the SoA is mandatory and must cover all 38 Annex A controls
• Understand the four risk treatment options (avoid, modify, transfer, accept)
• Know that risk owners must approve treatment plans and accept residual risk
• Be able to explain AI-specific risk categories
• Remember objectives must be consistent with AI policy
