Clause 6: Planning
AI risk assessment, risk treatment, establishing AI objectives, and planning actions to address risks and opportunities.
Chapter Overview
Clause 6 is one of the most critical clauses in ISO 42001, covering risk-based planning. It includes AI-specific requirements for risk assessment (6.1.2) and risk treatment (6.1.3) that go beyond standard Annex SL requirements. Mastering this clause is essential for both the exam and implementation.
Clause Structure
| Sub-clause | Title | Focus |
|---|---|---|
| 6.1 | Actions to address risks and opportunities | Risk-based planning |
| 6.1.1 | General | Determining risks and opportunities |
| 6.1.2 | AI risk assessment | AI-specific risk assessment process |
| 6.1.3 | AI risk treatment | AI-specific risk treatment process |
| 6.1.4 | Planning actions | Integrating actions into AIMS |
| 6.2 | AI objectives and planning to achieve them | Setting and achieving objectives |
6.1.1 General - Risks and Opportunities
Requirement
When planning for the AIMS, the organization shall consider the issues referred to in 4.1 and the requirements referred to in 4.2 and determine the risks and opportunities that need to be addressed to:
- Ensure the AIMS can achieve its intended outcomes
- Prevent or reduce undesired effects
- Achieve continual improvement
Two Types of Risk
AIMS Risks: Risks to the management system itself (e.g., lack of resources, poor implementation)
AI System Risks: Risks from AI systems (e.g., bias, errors, security vulnerabilities, societal harm)
Clause 6.1 addresses both types. The AI-specific risks are detailed in 6.1.2 and 6.1.3.
6.1.2 AI Risk Assessment
Requirement
The organization shall define and apply an AI risk assessment process that:
- Establishes and maintains AI risk criteria
- Ensures repeated AI risk assessments produce consistent, valid, and comparable results
- Identifies AI risks associated with the loss of confidentiality, integrity, and availability, and other relevant AI risks
- Identifies risks associated with the development, provision, or use of AI systems throughout their lifecycle
- Analyzes AI risks by assessing potential consequences and realistic likelihood
- Evaluates AI risks by comparing results against established criteria and prioritizing risks for treatment
AI Risk Categories
Technical Risks: Model performance, robustness, security, reliability
Data Risks: Bias, quality, privacy, provenance, representativeness
Operational Risks: Availability, scalability, maintainability, integration
Ethical Risks: Fairness, transparency, human agency, accountability
Legal/Compliance Risks: Regulatory violations, liability, contractual
Societal Risks: Employment impact, environmental, social inequality
Reputational Risks: Public perception, stakeholder trust, brand damage
Risk Assessment Process
| Step | Activity | Output |
|---|---|---|
| 1 | Establish context | Risk criteria, scope, boundaries |
| 2 | Risk identification | Risk register with identified risks |
| 3 | Risk analysis | Likelihood and consequence ratings |
| 4 | Risk evaluation | Prioritized risks, treatment decisions |
| 5 | Documentation | Risk assessment report |
Risk Criteria
Establish criteria for:
| Criteria Type | Description | Example Scale |
|---|---|---|
| Likelihood | Probability of risk occurring | 1-5 (Rare to Almost Certain) |
| Consequence | Impact if risk materializes | 1-5 (Negligible to Catastrophic) |
| Risk Level | Combined likelihood × consequence | Low, Medium, High, Critical |
| Risk Appetite | Acceptable risk threshold | Accept risks below score of 8 |
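The scoring scheme in the table above can be sketched in a few lines of Python. The band boundaries and the appetite threshold of 8 mirror the example scales; they are illustrative values for your own risk criteria, not thresholds prescribed by the standard.

```python
# Illustrative risk-scoring sketch using the example 1-5 scales above.
# Band boundaries and the appetite threshold are assumptions, not
# values mandated by ISO 42001.

def risk_score(likelihood: int, consequence: int) -> int:
    """Combine a 1-5 likelihood and a 1-5 consequence into a score (1-25)."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be between 1 and 5")
    return likelihood * consequence

def risk_level(score: int) -> str:
    """Map a score to an example four-band level."""
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

RISK_APPETITE = 8  # example threshold: accept risks scoring below 8

def needs_treatment(likelihood: int, consequence: int) -> bool:
    """Does the risk exceed the organization's stated appetite?"""
    return risk_score(likelihood, consequence) >= RISK_APPETITE

print(risk_score(4, 3), risk_level(risk_score(4, 3)))  # 12 High
```

A risk scored 4 × 3 = 12 lands in the High band and exceeds the example appetite, so it would be prioritized for treatment under 6.1.3.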
AI-Specific Risk Factors
| Factor | Risk Considerations |
|---|---|
| Autonomy Level | Higher autonomy = higher risk potential |
| Decision Impact | Decisions affecting rights, safety, finances |
| Reversibility | Can AI decisions be undone? |
| Transparency | Can decisions be explained? |
| Data Sensitivity | Personal, confidential, or sensitive data |
| Scale | Number of people/transactions affected |
| Vulnerability | Susceptibility to attacks or manipulation |
Template: AI Risk Register
Columns:
• Risk ID (unique identifier)
• AI System (which system)
• Risk Category (technical, data, ethical, etc.)
• Risk Description (what could happen)
• Risk Source (cause/trigger)
• Affected Parties (who is impacted)
• Likelihood (1-5)
• Consequence (1-5)
• Risk Score (L × C)
• Risk Level (Low/Medium/High/Critical)
• Existing Controls (current mitigation)
• Treatment Decision (accept/treat/transfer/avoid)
• Treatment Actions (if treating)
• Risk Owner (accountable person)
• Review Date (next review)
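The register columns above can be modeled as a small data structure so that scores and levels are derived consistently rather than typed by hand. This is a minimal sketch: the field names follow the template, but the class itself and the level bands are assumptions, not part of the standard.

```python
# Minimal sketch of a risk-register entry; field names follow the
# template above, the class and bands are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    """The four treatment options from 6.1.3."""
    ACCEPT = "accept"
    TREAT = "treat"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class RiskEntry:
    risk_id: str
    ai_system: str
    category: str          # technical, data, ethical, ...
    description: str
    likelihood: int        # 1-5
    consequence: int       # 1-5
    treatment: Treatment
    risk_owner: str

    @property
    def score(self) -> int:
        """Risk Score column: L x C."""
        return self.likelihood * self.consequence

    @property
    def level(self) -> str:
        """Risk Level column, using example bands; tune to your criteria."""
        for threshold, label in ((20, "Critical"), (12, "High"), (6, "Medium")):
            if self.score >= threshold:
                return label
        return "Low"

entry = RiskEntry("R-001", "Loan scoring model", "data",
                  "Training data under-represents protected groups",
                  3, 4, Treatment.TREAT, "Head of Data Science")
print(entry.score, entry.level)  # 12 High
```

Deriving score and level from the raw ratings keeps repeated assessments consistent and comparable, which is exactly what 6.1.2 requires of the process.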
6.1.3 AI Risk Treatment
Requirement
The organization shall define and apply an AI risk treatment process to:
- Select appropriate AI risk treatment options, taking account of the risk assessment results
- Determine all controls necessary to implement the AI risk treatment option(s) chosen
- Compare controls with Annex A to verify no necessary controls are omitted
- Produce a Statement of Applicability (SoA)
- Formulate an AI risk treatment plan
- Obtain risk owners' approval of the AI risk treatment plan and acceptance of residual AI risks
Risk Treatment Options
Avoid: Eliminate the risk by not proceeding with the AI activity
Modify/Mitigate: Implement controls to reduce likelihood or consequence
Transfer: Share risk with third party (insurance, outsourcing)
Accept: Acknowledge and monitor without additional treatment
Statement of Applicability (SoA)
The SoA is a mandatory document that:
- Lists all 38 Annex A controls
- States whether each control is applicable or not
- Provides justification for exclusions
- Indicates implementation status
- References how controls are implemented
For each control:
• Control reference (A.2.2, A.3.2, etc.)
• Control name
• Applicable? (Yes/No)
• Justification if not applicable
• Implementation status (Implemented/Partial/Planned/Not Implemented)
• Implementation reference (document, process, tool)
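The two most common SoA nonconformities (missing controls and unjustified exclusions) can be caught mechanically. The sketch below is an assumed validation routine, not a tool from the standard; the control IDs shown are a placeholder subset, so substitute the full list of 38 Annex A references (A.2.2, A.3.2, and so on).

```python
# Sketch of an SoA completeness check: every Annex A control must
# appear, and any control marked not applicable needs a justification.
# The IDs below are a placeholder subset of the 38 Annex A controls.

def validate_soa(soa: list, annex_a_ids: set) -> list:
    """Return a list of problems; an empty list means the SoA passes."""
    problems = []
    seen = {row["control"] for row in soa}
    for missing in sorted(annex_a_ids - seen):
        problems.append(f"{missing}: control missing from SoA")
    for row in soa:
        if not row["applicable"] and not row.get("justification"):
            problems.append(f"{row['control']}: exclusion lacks justification")
    return problems

annex_a = {"A.2.2", "A.3.2", "A.4.2"}  # placeholder subset only
soa = [
    {"control": "A.2.2", "applicable": True, "status": "Implemented"},
    {"control": "A.3.2", "applicable": False},  # no justification -> flagged
]
for problem in validate_soa(soa, annex_a):
    print(problem)
```

Running a check like this before an audit surfaces exactly the gaps an auditor will probe: omitted controls and exclusions without a documented rationale.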
Risk Treatment Plan
For risks requiring treatment, document:
| Element | Description |
|---|---|
| Risk Reference | Link to risk register |
| Treatment Actions | Specific actions to implement |
| Controls | Annex A controls being implemented |
| Resources | Budget, personnel, tools needed |
| Responsibility | Who is accountable |
| Timeline | Target completion date |
| Success Criteria | How effectiveness will be measured |
| Residual Risk | Expected risk level after treatment |
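Before a risk owner signs off, a treatment plan can be checked against the elements in the table above: treatment should actually lower the risk, effectiveness must be measurable, and someone must be accountable. The field names below are illustrative assumptions, not prescribed by the standard.

```python
# Sketch of a pre-approval check on a treatment-plan entry; the
# field names and the rules are illustrative assumptions.

def plan_ready_for_approval(plan: dict) -> list:
    """Return blocking issues; an empty list means ready for owner sign-off."""
    issues = []
    if plan["residual_score"] > plan["assessed_score"]:
        issues.append("residual risk exceeds assessed risk; treatment ineffective")
    if not plan.get("success_criteria"):
        issues.append("no success criteria for measuring effectiveness")
    if not plan.get("owner"):
        issues.append("no accountable risk owner assigned")
    return issues

plan = {
    "risk_ref": "R-001",              # link back to the risk register
    "assessed_score": 12,
    "residual_score": 6,              # expected level after treatment
    "success_criteria": "bias complaints reduced 50% within 12 months",
    "owner": "Head of Data Science",
}
print(plan_ready_for_approval(plan))  # [] -> ready for approval
```

An empty issue list corresponds to the 6.1.3 requirement that risk owners formally approve the plan and accept the residual risk.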
6.1.4 Planning Actions
Requirement
The organization shall plan:
- Actions to address risks and opportunities
- How to integrate and implement actions into AIMS processes
- How to evaluate the effectiveness of these actions
6.2 AI Objectives and Planning to Achieve Them
Requirement
The organization shall establish AI objectives at relevant functions, levels, and processes. The AI objectives shall:
- Be consistent with the AI policy
- Be measurable (if practicable)
- Take into account applicable requirements
- Be monitored
- Be communicated
- Be updated as appropriate
When planning how to achieve AI objectives, the organization shall determine:
- What will be done
- What resources will be required
- Who will be responsible
- When it will be completed
- How the results will be evaluated
SMART AI Objectives
Specific - Clear and well-defined
Measurable - Quantifiable metrics
Achievable - Realistic and attainable
Relevant - Aligned with AI policy
Time-bound - Clear deadline
Example AI Objectives
| Objective | Metric | Target | Timeline |
|---|---|---|---|
| Complete AI risk assessments | % of AI systems assessed | 100% | Q2 2025 |
| Implement human oversight | % of high-risk AI with oversight | 100% | Q3 2025 |
| AI incident response | Average response time | <4 hours | Ongoing |
| Staff training | % staff trained on AI policy | 95% | Q4 2025 |
| Reduce AI bias incidents | Bias-related complaints | 50% reduction | 12 months |
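Because 6.2 requires objectives to be measurable and monitored, progress against the table above can be tracked with a simple direction-aware comparison. The metric names and values below are taken from the example objectives; the helper itself is an illustrative assumption.

```python
# Sketch of monitoring the example AI objectives above; the helper
# function and data layout are illustrative assumptions.

def on_track(current: float, target: float, higher_is_better: bool = True) -> bool:
    """Has the metric reached its target in the desired direction?"""
    return current >= target if higher_is_better else current <= target

objectives = [
    # (objective, current value, target, higher_is_better)
    ("AI risk assessments complete (%)", 80.0, 100.0, True),
    ("High-risk AI with human oversight (%)", 100.0, 100.0, True),
    ("Average incident response time (hours)", 3.5, 4.0, False),
]
for name, current, target, higher in objectives:
    status = "on track" if on_track(current, target, higher) else "behind"
    print(f"{name}: {status}")
```

Note the direction flag: a response-time objective is met when the metric falls *below* its target, while coverage objectives are met when the metric rises to it.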
Documented Information Requirements
Required:
• AI Risk Assessment Process (6.1.2)
• AI Risk Treatment Process (6.1.3)
• Statement of Applicability (6.1.3)
• AI Risk Treatment Plan (6.1.3)
Recommended:
• Risk Criteria Document
• AI Risk Register
• AI Objectives Register
Sample Audit Questions
6.1.2 AI Risk Assessment:
• Show me your AI risk assessment methodology
• What risk criteria do you use?
• How do you ensure consistent risk assessments?
• Walk me through a risk assessment for one of your AI systems
• What AI-specific risks have you identified?
• How do you consider the entire AI lifecycle in risk assessment?
6.1.3 AI Risk Treatment:
• Show me your Statement of Applicability
• How did you determine which Annex A controls are applicable?
• Justify why control X is marked as not applicable
• Show me your risk treatment plan
• How do risk owners approve treatment plans?
• What residual risks have been accepted and by whom?
6.2 AI Objectives:
• What are your AI objectives?
• How do objectives link to your AI policy?
• How do you measure progress against objectives?
• Who is responsible for each objective?
Common Nonconformities
| Type | Nonconformity | How to Avoid |
|---|---|---|
| Major | No documented risk assessment process | Document methodology with criteria |
| Major | No Statement of Applicability | Create SoA covering all 38 controls |
| Major | SoA missing justification for exclusions | Document rationale for each exclusion |
| Major | Risk treatment plan not approved by risk owners | Obtain formal approvals |
| Minor | Risk assessments not covering AI lifecycle | Assess risks at each lifecycle stage |
| Minor | Objectives not measurable | Define metrics for each objective |
| Minor | Risk criteria not documented | Document likelihood/consequence scales |
Key Takeaways
1. Clause 6 contains AI-specific requirements (6.1.2, 6.1.3) beyond standard Annex SL
2. Risk assessment must cover the entire AI system lifecycle
3. Statement of Applicability is mandatory and must justify exclusions
4. Risk treatment plans require risk owner approval
5. AI objectives should be SMART and linked to policy
6. Compare controls against Annex A to ensure completeness
Exam Tips
• Know that 6.1.2 and 6.1.3 are AI-specific extensions to Annex SL
• Remember the SoA is mandatory and must cover all 38 Annex A controls
• Understand the four risk treatment options (avoid, modify, transfer, accept)
• Know that risk owners must approve treatment plans and accept residual risk
• Be able to explain AI-specific risk categories
• Remember objectives must be consistent with AI policy