Clause 8: Operation
Operational planning and control, AI risk assessment execution, risk treatment implementation, and AI system impact assessment.
Chapter Overview
Clause 8 is where the AIMS becomes operational. This clause covers the execution of risk assessments, the implementation of risk treatments, and, critically, the AI system impact assessment (8.4), a requirement unique to ISO 42001 that is not found in other management system standards.
Clause Structure
| Sub-clause | Title | Focus |
|---|---|---|
| 8.1 | Operational planning and control | Planning and controlling AIMS processes |
| 8.2 | AI risk assessment | Conducting risk assessments |
| 8.3 | AI risk treatment | Implementing risk treatment plans |
| 8.4 | AI system impact assessment | Assessing impacts on individuals and society |
8.1 Operational Planning and Control
Requirement
The organization shall plan, implement, and control the processes needed to meet requirements by:
- Establishing criteria for the processes
- Implementing control of the processes in accordance with the criteria
- Keeping documented information to the extent necessary to have confidence that the processes have been carried out as planned
The organization shall control planned changes and review the consequences of unintended changes, taking action to mitigate any adverse effects.
The organization shall ensure that outsourced processes are controlled.
Operational control means having defined processes with clear criteria, implementing them consistently, and maintaining evidence that they work as intended. This applies to all AI-related activities within your AIMS scope.
Operational Controls for AI
| Process Area | Control Examples |
|---|---|
| AI Development | Development standards, code review, testing requirements |
| Data Management | Data quality checks, provenance tracking, bias testing |
| Model Training | Training protocols, validation procedures, documentation |
| Deployment | Approval gates, deployment checklists, rollback procedures |
| Monitoring | Performance metrics, drift detection, alert thresholds |
| Change Management | Change requests, impact assessment, approval workflow |
| Incident Response | Incident procedures, escalation paths, communication plans |
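Operational criteria are easiest to demonstrate to an auditor when they are machine-checkable. Below is a minimal sketch of a deployment approval gate built from the kinds of criteria in the table above; the evidence items and function names are hypothetical illustrations, not text from the standard.

```python
# Hypothetical deployment approval gate. The evidence items are illustrative
# examples of documented criteria (8.1), not requirements from ISO 42001.
REQUIRED_EVIDENCE = {
    "risk_assessment_approved",      # 8.2 result on file
    "impact_assessment_approved",    # 8.4 result on file
    "bias_testing_passed",
    "rollback_procedure_documented",
}

def approve_deployment(evidence: set[str]) -> tuple[bool, list[str]]:
    """Check supplied evidence against the documented criteria.

    Returns (approved, missing items) so any gap is auditable.
    """
    missing = sorted(REQUIRED_EVIDENCE - evidence)
    return (not missing, missing)

approved, missing = approve_deployment({"risk_assessment_approved",
                                        "bias_testing_passed"})
print(approved, missing)
# False ['impact_assessment_approved', 'rollback_procedure_documented']
```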
Outsourced Processes
If you outsource AI-related processes (cloud AI services, model development, data labeling), you must:
- Define requirements for outsourced processes
- Include AI governance requirements in contracts
- Monitor supplier performance
- Retain accountability for outsourced activities
8.2 AI Risk Assessment
Requirement
The organization shall perform AI risk assessments at planned intervals, or when significant changes are proposed or occur, taking account of the criteria established in 6.1.2.
The organization shall retain documented information of the results of the AI risk assessments.
A new or updated assessment is triggered by:
- Planned intervals: annual review of all AI systems
- New AI systems: before deployment approval
- Significant changes: model updates, new data sources, expanded use cases
- Incidents: after AI-related incidents or near-misses
- External changes: new regulations, technology changes
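As a minimal sketch, these triggers can be encoded as a simple due-check. Everything here (field names, the 365-day interval) is an assumption for illustration; ISO 42001 does not prescribe a specific interval.

```python
from datetime import date, timedelta

# Hypothetical reassessment check; the interval and flags are illustrative.
def reassessment_due(last_assessed: date,
                     significant_change: bool = False,
                     incident_occurred: bool = False,
                     regulatory_change: bool = False,
                     interval_days: int = 365) -> bool:
    """True if any trigger for a new AI risk assessment applies."""
    overdue = date.today() - last_assessed > timedelta(days=interval_days)
    return overdue or significant_change or incident_occurred or regulatory_change
```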
Risk Assessment Execution
| Step | Activities | Outputs |
|---|---|---|
| 1. Preparation | Define scope, gather information, assemble team | Assessment plan |
| 2. Risk Identification | Identify risks across all categories and lifecycle stages | Risk list |
| 3. Risk Analysis | Assess likelihood and consequence for each risk | Risk ratings |
| 4. Risk Evaluation | Compare against criteria, prioritize risks | Prioritized risk register |
| 5. Documentation | Record results, recommendations, decisions | Risk assessment report |
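Steps 3 and 4 reduce to simple arithmetic once scales and acceptance criteria are fixed. The sketch below assumes 1-5 scales and a multiplicative rating; both are common conventions, not ISO 42001 requirements, and every name in it is illustrative.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    risk_id: str
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain); assumed scale
    consequence: int  # 1 (negligible) to 5 (severe); assumed scale

    @property
    def rating(self) -> int:
        # Step 3: multiplicative likelihood x consequence rating.
        return self.likelihood * self.consequence

def evaluate(risks: list[Risk], acceptance_threshold: int = 6) -> list[Risk]:
    """Step 4: risks above the acceptance criteria, highest rating first."""
    return sorted((r for r in risks if r.rating > acceptance_threshold),
                  key=lambda r: r.rating, reverse=True)

register = evaluate([
    Risk("R-01", "Training data bias", likelihood=4, consequence=4),
    Risk("R-02", "Model drift in production", likelihood=3, consequence=2),
])
print([(r.risk_id, r.rating) for r in register])  # [('R-01', 16)]
```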
AI System Lifecycle Risk Considerations
| Lifecycle Stage | Risk Considerations |
|---|---|
| Design | Requirements gaps, ethical considerations, feasibility |
| Data Collection | Bias, privacy, consent, representativeness |
| Development | Model selection, training issues, security vulnerabilities |
| Testing | Inadequate testing, edge cases, adversarial inputs |
| Deployment | Integration issues, user readiness, rollback capability |
| Operation | Performance drift, misuse, unintended consequences |
| Monitoring | Detection gaps, alert fatigue, response delays |
| Retirement | Data retention, knowledge loss, transition risks |
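One way to avoid the common finding that assessments skip lifecycle stages (see Common Nonconformities below) is to seed risk identification (step 2) from a stage-keyed checklist. The sketch below simply encodes the table above; the data structure is an assumption for illustration, while the prompts are the table's own.

```python
# Stage-keyed prompts from the lifecycle table, used to seed risk
# identification so that no stage is skipped.
LIFECYCLE_PROMPTS: dict[str, list[str]] = {
    "design": ["requirements gaps", "ethical considerations", "feasibility"],
    "data_collection": ["bias", "privacy", "consent", "representativeness"],
    "development": ["model selection", "training issues", "security vulnerabilities"],
    "testing": ["inadequate testing", "edge cases", "adversarial inputs"],
    "deployment": ["integration issues", "user readiness", "rollback capability"],
    "operation": ["performance drift", "misuse", "unintended consequences"],
    "monitoring": ["detection gaps", "alert fatigue", "response delays"],
    "retirement": ["data retention", "knowledge loss", "transition risks"],
}

def coverage_gaps(assessed_stages: set[str]) -> set[str]:
    """Lifecycle stages missing from a risk assessment."""
    return set(LIFECYCLE_PROMPTS) - assessed_stages
```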
8.3 AI Risk Treatment
Requirement
The organization shall implement the AI risk treatment plan.
The organization shall retain documented information of the results of the AI risk treatment.
This clause requires you to actually implement the controls and actions identified in your risk treatment plan (created under 6.1.3). It is not enough to have a plan; you must execute it and document the results.
Treatment Implementation Process
- Review approved risk treatment plan
- Assign implementation responsibilities
- Allocate resources (budget, personnel, tools)
- Implement controls according to plan
- Document implementation evidence
- Verify control effectiveness
- Update risk register with residual risk levels
- Report to risk owners
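As a sketch of how steps 4 through 8 can leave an audit trail, the hypothetical tracker below records status, evidence links, and residual risk per treatment action. All field and status names are illustrative assumptions, not prescribed by the standard.

```python
from dataclasses import dataclass, field

@dataclass
class TreatmentAction:
    action_id: str
    risk_id: str
    owner: str
    status: str = "planned"  # planned / in_progress / implemented / verified
    evidence: list[str] = field(default_factory=list)  # configs, test results, records
    residual_risk: int | None = None  # re-rated after treatment (step 7)

def needs_attention(plan: list[TreatmentAction]) -> list[TreatmentAction]:
    """Actions still missing the verification or evidence an auditor will ask for."""
    return [a for a in plan if a.status != "verified" or not a.evidence]
```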
Evidence of Risk Treatment
| Treatment Type | Evidence Examples |
|---|---|
| Technical controls | System configurations, test results, monitoring dashboards |
| Process controls | Procedures, work instructions, process records |
| Training | Training records, competence assessments |
| Documentation | Policies, guidelines, user documentation |
| Third-party controls | Contracts, SLAs, audit reports |
8.4 AI System Impact Assessment
Requirement
The organization shall conduct an AI system impact assessment for AI systems, taking into account the potential consequences of the AI system for individuals, groups of individuals, and societies.
The organization shall retain documented information of the results of the AI system impact assessment.
Clause 8.4 is unique to ISO 42001 and not found in ISO 27001 or other management system standards. It requires assessing impacts beyond traditional risk assessment - specifically focusing on how AI systems affect people and society.
Impact Assessment vs Risk Assessment
| Aspect | Risk Assessment (8.2) | Impact Assessment (8.4) |
|---|---|---|
| Focus | Risks to organization and AIMS | Impacts on individuals and society |
| Perspective | Organization-centric | Human-centric |
| Scope | All risk categories | Consequences for people |
| Output | Risk register, treatment plan | Impact assessment report |
Impact Categories
Impacts on Individuals
| Category | Examples |
|---|---|
| Rights & Freedoms | Privacy, freedom of expression, non-discrimination |
| Safety & Health | Physical safety, mental health, wellbeing |
| Economic | Employment, financial decisions, access to services |
| Autonomy | Decision-making ability, informed consent, choice |
| Dignity | Respect, fair treatment, human oversight |
Impacts on Society
| Category | Examples |
|---|---|
| Social | Social cohesion, inequality, discrimination at scale |
| Economic | Labor market disruption, economic concentration |
| Democratic | Misinformation, manipulation, surveillance |
| Environmental | Energy consumption, resource use, e-waste |
| Cultural | Cultural bias, homogenization, loss of diversity |
Impact Assessment Process
- Identify AI System: Define the AI system being assessed
- Describe Functionality: What does the AI system do?
- Identify Affected Parties: Who is affected by the AI system?
- Assess Individual Impacts: Evaluate impacts on individuals
- Assess Societal Impacts: Evaluate broader societal impacts
- Evaluate Severity: Rate the severity of identified impacts
- Identify Mitigations: Define measures to reduce negative impacts
- Document Results: Create impact assessment report
- Review and Approve: Obtain appropriate approvals
- Monitor: Ongoing monitoring of actual impacts
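As a sketch, one entry of the individual impact assessment (section 4 of the template below) could be captured as structured data, using the template's own 1-5 scales. The low/medium/high banding is an assumed convention for illustration, not an ISO 42001 rule.

```python
from dataclasses import dataclass

@dataclass
class ImpactEntry:
    """One row of section 4 of the template below (individual impacts)."""
    category: str            # e.g. "Rights & Freedoms", "Safety & Health"
    description: str
    likelihood: int          # 1-5, per the template
    severity: int            # 1-5, per the template
    affected_groups: list[str]
    existing_mitigations: list[str]

    @property
    def residual_level(self) -> str:
        # Banding likelihood x severity into low/medium/high is an assumed
        # convention, not an ISO 42001 requirement.
        score = self.likelihood * self.severity
        return "high" if score >= 15 else "medium" if score >= 6 else "low"
```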
Template: AI System Impact Assessment
1. AI SYSTEM IDENTIFICATION
• System Name:
• System ID:
• Business Owner:
• Assessment Date:
• Assessor(s):
2. SYSTEM DESCRIPTION
• Purpose and objectives:
• AI type (ML, NLP, computer vision, etc.):
• Input data:
• Outputs and decisions:
• Autonomy level:
• Scale of use:
3. AFFECTED PARTIES
• Direct users:
• Subjects of AI decisions:
• Indirectly affected groups:
• Vulnerable groups:
4. INDIVIDUAL IMPACT ASSESSMENT
For each impact category, assess: Description, Likelihood (1-5), Severity (1-5), Affected Groups, Existing Mitigations, Residual Impact Level
5. SOCIETAL IMPACT ASSESSMENT
For each societal category, assess: Description, Scale, Severity, Existing Mitigations, Residual Impact Level
6. MITIGATION MEASURES
• Additional controls required:
• Human oversight measures:
• Transparency measures:
• Monitoring requirements:
7. CONCLUSIONS
• Overall impact rating:
• Recommendation (proceed/proceed with conditions/do not proceed):
• Conditions or requirements:
8. APPROVAL
• Assessed by: [Name, Date]
• Reviewed by: [Name, Date]
• Approved by: [Name, Date]
Documented Information Requirements
Required:
• Operational planning documentation (8.1)
• AI risk assessment results (8.2)
• AI risk treatment results (8.3)
• AI system impact assessment results (8.4)
Recommended:
• Operational procedures for AI activities
• Change management records
• Outsourced process controls
• Impact assessment methodology
Sample Audit Questions
8.1 Operational Planning:
• How do you control AI-related operational processes?
• What criteria have you established for AI processes?
• How do you manage changes to AI systems?
• How do you control outsourced AI processes?
8.2 AI Risk Assessment:
• Show me a completed AI risk assessment
• How often do you conduct risk assessments?
• What triggers a new risk assessment?
• How do you ensure assessments cover the full AI lifecycle?
8.3 AI Risk Treatment:
• Show me evidence that risk treatments have been implemented
• How do you verify that controls are effective?
• What is the current status of your risk treatment plan?
8.4 AI Impact Assessment:
• Show me an AI system impact assessment
• How do you assess impacts on individuals?
• How do you consider societal impacts?
• Who approves impact assessments?
• How do you handle AI systems with significant negative impacts?
Common Nonconformities
| Type | Nonconformity | How to Avoid |
|---|---|---|
| Major | No AI system impact assessments conducted | Implement impact assessment process |
| Major | Risk assessments not covering AI lifecycle | Use lifecycle-based assessment approach |
| Major | No evidence of risk treatment implementation | Document implementation evidence |
| Minor | Impact assessments missing societal considerations | Include societal impact categories |
| Minor | Operational criteria not documented | Document process criteria |
| Minor | Outsourced processes not controlled | Include AI requirements in contracts |
| Minor | Changes not systematically managed | Implement change management process |
Key Takeaways
1. Operational control requires defined criteria, implementation, and evidence
2. Risk assessments must be conducted at planned intervals and when changes occur
3. Risk treatment plans must be implemented and documented
4. AI system impact assessment (8.4) is unique to ISO 42001
5. Impact assessment focuses on individuals and society, not just organizational risk
6. Outsourced AI processes must be controlled
Exam Tips
• Know that 8.4 (AI system impact assessment) is unique to ISO 42001
• Understand the difference between risk assessment and impact assessment
• Remember that risk assessments must cover the entire AI lifecycle
• Know that outsourced processes must be controlled
• Impact assessment considers individuals AND society
• All four sub-clauses require documented information