Chapter 22

AI Risk Assessment Methodology Guide

A complete methodology for conducting AI risk assessments, including criteria definition, risk identification, analysis, evaluation, and documentation.


Chapter Overview

This chapter provides a complete methodology for conducting AI risk assessments as required by Clause 6.1.2 and Clause 8.2. A robust risk assessment process is fundamental to effective AI governance.

Clause 6.1.2 Requirements

The organization shall define and apply an AI risk assessment process that:
• Establishes and maintains AI risk criteria
• Ensures repeated assessments produce consistent, valid, comparable results
• Identifies AI risks (confidentiality, integrity, availability, and other AI risks)
• Identifies risks throughout the AI system lifecycle
• Analyzes risks (likelihood and consequence)
• Evaluates risks against criteria and prioritizes for treatment

Risk Assessment Process Overview

| Phase | Activities | Outputs |
| --- | --- | --- |
| 1. Establish Context | Define scope, criteria, methodology | Risk criteria document |
| 2. Risk Identification | Identify risks across all categories | Risk list |
| 3. Risk Analysis | Assess likelihood and consequence | Analyzed risks |
| 4. Risk Evaluation | Compare against criteria, prioritize | Prioritized risk register |
| 5. Documentation | Document assessment and results | Risk assessment report |

Phase 1: Establish Context

1.1 Define Scope

Clearly define what is being assessed:

  • Which AI system(s)
  • Which lifecycle stages
  • Which business processes
  • Which locations/environments
  • Assessment boundaries

1.2 Define Risk Criteria

Likelihood Scale

| Level | Rating | Description | Frequency Guide |
| --- | --- | --- | --- |
| Rare | 1 | Very unlikely to occur | Less than once per 5 years |
| Unlikely | 2 | Could occur but not expected | Once per 2-5 years |
| Possible | 3 | Might occur | Once per 1-2 years |
| Likely | 4 | Will probably occur | Once per year |
| Almost Certain | 5 | Expected to occur | Multiple times per year |

Consequence Scale

| Level | Rating | Financial | Operational | Reputational | Individual Impact |
| --- | --- | --- | --- | --- | --- |
| Negligible | 1 | <£10K | Minor disruption | No external awareness | No noticeable impact |
| Minor | 2 | £10K-100K | Some disruption | Local awareness | Minor inconvenience |
| Moderate | 3 | £100K-1M | Significant disruption | Regional/industry awareness | Significant negative impact |
| Major | 4 | £1M-10M | Major disruption | National awareness | Serious harm |
| Catastrophic | 5 | >£10M | Business threatening | International awareness | Severe/irreversible harm |

Risk Level Matrix

| Likelihood / Consequence | 1 - Negligible | 2 - Minor | 3 - Moderate | 4 - Major | 5 - Catastrophic |
| --- | --- | --- | --- | --- | --- |
| 5 - Almost Certain | Medium (5) | Medium (10) | High (15) | Critical (20) | Critical (25) |
| 4 - Likely | Low (4) | Medium (8) | High (12) | High (16) | Critical (20) |
| 3 - Possible | Low (3) | Medium (6) | Medium (9) | High (12) | High (15) |
| 2 - Unlikely | Low (2) | Low (4) | Medium (6) | Medium (8) | Medium (10) |
| 1 - Rare | Low (1) | Low (2) | Low (3) | Low (4) | Medium (5) |

Risk Level Definitions

| Level | Score Range | Response Required |
| --- | --- | --- |
| Critical | 20-25 | Immediate action required; escalate to senior management |
| High | 12-16 | Urgent treatment required; management attention needed |
| Medium | 5-10 | Treatment required; plan and implement controls |
| Low | 1-4 | Accept or treat as resources allow; monitor |
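Because the matrix above is strictly multiplicative, the risk level can be derived directly from the score bands. A minimal sketch in Python (function names are illustrative, not from any standard library):

```python
def risk_score(likelihood: int, consequence: int) -> int:
    """Risk Score = Likelihood x Consequence, each rated 1-5."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be rated 1-5")
    return likelihood * consequence

def risk_level(likelihood: int, consequence: int) -> str:
    """Map a score onto the Critical/High/Medium/Low bands defined above."""
    score = risk_score(likelihood, consequence)
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"
```

For example, a risk rated Likely (4) with Major (4) consequence scores 16 and lands in the High band, matching the matrix cell.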

1.3 Define Risk Appetite

Risk Appetite Statement Example

"The organization has low appetite for AI risks that could:
• Cause significant harm to individuals
• Result in regulatory non-compliance
• Damage organizational reputation

The organization accepts moderate risk for AI initiatives that:
• Have potential significant business benefit
• Can be monitored and controlled
• Are reversible if issues occur"

Phase 2: Risk Identification

2.1 Risk Identification Methods

| Method | Description | Best For |
| --- | --- | --- |
| Checklist Review | Use Annex C risk sources as checklist | Comprehensive coverage |
| Brainstorming | Team sessions to identify risks | Creative identification |
| Interviews | Discuss risks with stakeholders | Expert knowledge capture |
| Scenario Analysis | "What if" scenarios | Complex risk chains |
| Historical Analysis | Review past incidents | Known risk patterns |
| FMEA | Failure Mode and Effects Analysis | Technical systems |

2.2 AI Risk Categories

Ensure coverage across all categories (reference Annex C):

  • Data Risks: Quality, bias, privacy, provenance
  • Model Risks: Accuracy, robustness, explainability, drift
  • Technical Risks: Security, availability, integration
  • Human Risks: Misuse, over-reliance, skill gaps
  • Organizational Risks: Governance, resources, communication
  • External Risks: Regulatory, threat actors, technology change
  • Ethical Risks: Fairness, transparency, human rights
  • Impact Risks: Individual harm, societal harm

2.3 Lifecycle Coverage

Identify risks at each lifecycle stage:

| Stage | Example Risks |
| --- | --- |
| Design | Unclear requirements, ethical issues not identified |
| Data Collection | Biased data, privacy violations, insufficient data |
| Development | Model errors, security vulnerabilities, poor documentation |
| Testing | Inadequate testing, missed edge cases |
| Deployment | Integration failures, user readiness gaps |
| Operation | Misuse, performance issues, incidents |
| Monitoring | Drift undetected, alert fatigue |
| Retirement | Data retention issues, knowledge loss |

Phase 3: Risk Analysis

3.1 Assess Likelihood

For each identified risk, assess likelihood considering:

  • Historical occurrence
  • Current control effectiveness
  • Threat landscape
  • Vulnerability exposure
  • Environmental factors

3.2 Assess Consequence

For each identified risk, assess consequence considering:

  • Financial impact
  • Operational impact
  • Reputational impact
  • Regulatory/legal impact
  • Impact on individuals
  • Societal impact

3.3 Calculate Risk Score

Risk Score = Likelihood × Consequence

Document both inherent risk (without controls) and residual risk (with existing controls).
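As a sketch, inherent and residual scoring for a single risk might look like the following; the risk and all its ratings are hypothetical examples, not prescribed values:

```python
def score(likelihood: int, consequence: int) -> int:
    """Risk Score = Likelihood x Consequence (each rated 1-5)."""
    return likelihood * consequence

risk = {
    "risk_id": "AI-001",
    "description": "Model drift degrades prediction accuracy in production",
    # Inherent ratings: assessed without any controls in place
    "inherent_likelihood": 4,
    "inherent_consequence": 4,
    # Residual ratings: assessed with existing monitoring controls applied
    "residual_likelihood": 2,
    "residual_consequence": 4,
}

risk["inherent_score"] = score(risk["inherent_likelihood"],
                               risk["inherent_consequence"])
risk["residual_score"] = score(risk["residual_likelihood"],
                               risk["residual_consequence"])
```

Here the existing controls reduce likelihood but not consequence, so the score falls from 16 (High) to 8 (Medium); recording both values shows how much the current controls are relied upon.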

Phase 4: Risk Evaluation

4.1 Compare Against Criteria

  • Compare each risk score against risk level matrix
  • Identify risks exceeding risk appetite
  • Flag risks requiring immediate attention

4.2 Prioritize Risks

Prioritize based on:

  • Risk level (Critical → High → Medium → Low)
  • Treatment urgency
  • Regulatory requirements
  • Stakeholder concerns
  • Resource availability

4.3 Treatment Decisions

| Risk Level | Typical Decision |
| --- | --- |
| Critical | Treat immediately or avoid activity |
| High | Treat with priority |
| Medium | Treat according to plan |
| Low | Accept or treat if efficient |
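In an automated risk register, the default decision table above can be captured as a simple lookup (a sketch; the dictionary name is an assumption):

```python
# Default treatment decision for each risk level, mirroring the table above.
TYPICAL_DECISION = {
    "Critical": "Treat immediately or avoid activity",
    "High": "Treat with priority",
    "Medium": "Treat according to plan",
    "Low": "Accept or treat if efficient",
}

# Look up the default decision for a High-rated risk.
decision = TYPICAL_DECISION["High"]
```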

Phase 5: Documentation

AI Risk Register Template

Risk Register Fields

Identification:
• Risk ID (unique identifier)
• AI System (which system)
• Risk Category (data, model, technical, etc.)
• Lifecycle Stage (design, operation, etc.)
• Risk Description (what could happen)
• Risk Source/Cause (why it might happen)
• Affected Parties (who is impacted)

Analysis:
• Likelihood Rating (1-5)
• Consequence Rating (1-5)
• Inherent Risk Score (L × C)
• Existing Controls
• Residual Likelihood
• Residual Consequence
• Residual Risk Score
• Risk Level (Critical/High/Medium/Low)

Treatment:
• Treatment Decision (Accept/Treat/Transfer/Avoid)
• Treatment Actions
• Target Risk Level
• Risk Owner
• Due Date

Monitoring:
• Review Date
• Status
• Notes
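The register fields above can be sketched as a Python dataclass; field names, types, and defaults are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    # Identification
    risk_id: str                # unique identifier, e.g. "AI-001"
    ai_system: str
    risk_category: str          # e.g. "data", "model", "technical"
    lifecycle_stage: str        # e.g. "design", "operation"
    risk_description: str       # what could happen
    risk_source: str            # why it might happen
    affected_parties: str       # who is impacted
    # Analysis (ratings 1-5)
    likelihood: int
    consequence: int
    existing_controls: str = ""
    residual_likelihood: int = 0
    residual_consequence: int = 0
    # Treatment
    treatment_decision: str = "Treat"   # Accept/Treat/Transfer/Avoid
    risk_owner: str = ""
    due_date: str = ""

    @property
    def inherent_score(self) -> int:
        """Inherent Risk Score = L x C, before controls."""
        return self.likelihood * self.consequence

    @property
    def residual_score(self) -> int:
        """Residual Risk Score, with existing controls applied."""
        return self.residual_likelihood * self.residual_consequence
```

Deriving the scores as properties rather than stored fields keeps them consistent with the ratings whenever an entry is re-assessed.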

Risk Assessment Report

Report Structure

1. Executive Summary
• Scope and objectives
• Key findings
• Critical/high risks summary
• Recommendations

2. Methodology
• Assessment approach
• Risk criteria used
• Team and stakeholders

3. Context
• AI systems assessed
• Scope and boundaries
• Assumptions and limitations

4. Risk Assessment Results
• Summary by category
• Risk register (detailed)
• Risk heat map

5. Conclusions and Recommendations
• Overall risk posture
• Priority treatment areas
• Next steps

6. Appendices
• Detailed risk register
• Risk criteria definitions
• Supporting evidence

Assessment Triggers

When to Conduct Risk Assessment

Planned:
• Annual comprehensive review
• Before new AI system deployment

Event-Triggered:
• Significant AI system changes
• New AI system development
• AI incidents or near-misses
• Regulatory changes
• Organizational changes
• New risk information
• After corrective actions

Key Takeaways - Risk Assessment

1. Define clear risk criteria before assessment
2. Cover all risk categories and lifecycle stages
3. Use consistent methodology for comparable results
4. Assess both likelihood and consequence
5. Document everything - it's mandatory
6. Conduct assessments at planned intervals and when triggered
7. Link risk assessment to risk treatment and SoA
