Annex A Controls: Use of AI Systems (A.9)
Detailed guidance on implementing Annex A controls for AI system use (A.9), covering intended use, fitness for purpose, and human oversight across three controls.
Chapter Overview
This chapter covers the Use of AI Systems domain (A.9), which ensures AI systems are used appropriately and with adequate human oversight. This domain contains three controls, including the critical human oversight control.
A.9 Use of AI Systems
Proper use of AI systems is as important as proper development. Even well-designed AI can cause harm if misused or operated without appropriate oversight.
A.9.2 Intended Use
| Attribute | Details |
|---|---|
| Control | The intended use of AI systems shall be defined and documented. |
| Purpose | Establish clear boundaries for AI system use |
| Related Clause | 8.1 (Operational planning and control) |
Implementation Guidance
- Define intended use cases for each AI system
- Document what the AI system should be used for
- Specify what the AI system should NOT be used for
- Identify user groups and their authorized uses
- Document environmental and operational constraints
- Communicate intended use to users
- Monitor for use outside intended scope
Intended Use Documentation
| Element | Description |
|---|---|
| Purpose Statement | The primary purpose of the AI system |
| Use Cases | Specific scenarios where use is appropriate |
| Authorized Users | Who is permitted to use the system |
| Operating Environment | Technical and operational requirements |
| Prohibited Uses | Uses that are explicitly not allowed |
| Limitations | Known constraints on effective use |
| Geographic Scope | Where the system may be used |
Example Intended Use Documentation
AI System: Customer Service Chatbot
Intended Use: Answer common customer questions about products, orders, and returns
Authorized Users: Website visitors, mobile app users
Prohibited Uses:
• Medical, legal, or financial advice
• Processing of sensitive personal data
• Decisions with significant impact on individuals
• Use with vulnerable populations without human oversight
Limitations: May not understand complex queries; complaints must be escalated to a human agent
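The prohibited-use list above only helps if use outside the intended scope is actually detected at runtime. A minimal sketch of such a scope guard, assuming a simple keyword-based detector (the `IntendedUse` structure and the trigger keywords are illustrative assumptions, not part of ISO 42001):

```python
# Hypothetical out-of-scope detector for the chatbot example above.
from dataclasses import dataclass

@dataclass
class IntendedUse:
    purpose: str
    authorized_users: list
    prohibited_topics: dict  # topic name -> trigger keywords (assumed)

    def check_query(self, query: str) -> list:
        """Return the prohibited topics a user query appears to touch."""
        q = query.lower()
        return [topic for topic, keywords in self.prohibited_topics.items()
                if any(k in q for k in keywords)]

chatbot_use = IntendedUse(
    purpose="Answer common questions about products, orders, and returns",
    authorized_users=["website visitors", "mobile app users"],
    prohibited_topics={
        "medical advice": ["diagnosis", "medication", "symptom"],
        "financial advice": ["invest", "stock", "portfolio"],
    },
)

flags = chatbot_use.check_query("Which stock should I invest in?")
# flags -> ["financial advice"]; escalate to a human instead of answering
```

In practice a production system would use a more robust classifier than keyword matching, but the control objective is the same: flagged queries are routed to a human rather than answered by the AI.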
Questions auditors may ask:
• How do you define intended use for AI systems?
• Show me intended use documentation
• What uses are prohibited?
• How do you communicate intended use to users?
• How do you detect use outside intended scope?
A.9.3 Fitness for Purpose
| Attribute | Details |
|---|---|
| Control | AI systems shall be fit for their intended purpose and perform as expected within defined boundaries. |
| Purpose | Ensure AI systems perform effectively for their intended use |
| Related Clauses | 8.1 (Operational planning and control), A.6.2.9 (Verification and validation) |
Implementation Guidance
- Define performance requirements for intended use
- Validate AI systems against intended use scenarios
- Test in conditions reflecting actual use
- Monitor ongoing fitness for purpose
- Address performance degradation
- Re-validate when changes occur
Fitness Assessment Areas
| Area | Assessment Questions |
|---|---|
| Performance | Does the system meet accuracy/quality requirements? |
| Reliability | Does the system perform consistently? |
| Robustness | Does the system handle edge cases and variations? |
| Scalability | Does the system handle expected volumes? |
| Usability | Can users effectively use the system? |
| Safety | Does the system operate safely in intended environment? |
Fitness Validation Process
1. Define Success Criteria: Measurable requirements for intended use
2. Test Design: Create tests reflecting real-world use scenarios
3. Validation Testing: Execute tests with representative data/users
4. Gap Analysis: Compare results against criteria
5. Remediation: Address any fitness gaps
6. Sign-off: Formal approval for intended use
7. Monitoring: Ongoing fitness monitoring in production
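Steps 1 through 4 above can be sketched as a simple criteria-versus-results comparison. The metric names and thresholds here are assumptions for illustration, not requirements from the standard:

```python
criteria = {          # step 1: success criteria for intended use (assumed values)
    "accuracy":      0.95,   # minimum acceptable
    "recall":        0.90,   # minimum acceptable
    "p95_latency_s": 2.0,    # maximum acceptable
}

measured = {          # step 3: results from validation testing (assumed values)
    "accuracy":      0.96,
    "recall":        0.88,
    "p95_latency_s": 1.4,
}

def gap_analysis(criteria, measured):
    """Step 4: return the metrics that fail their criterion."""
    gaps = {}
    for metric, threshold in criteria.items():
        value = measured[metric]
        # latency is a ceiling; the quality metrics are floors
        failed = value > threshold if metric.endswith("_s") else value < threshold
        if failed:
            gaps[metric] = (value, threshold)
    return gaps

print(gap_analysis(criteria, measured))
# {'recall': (0.88, 0.9)} -> remediation needed (step 5) before sign-off (step 6)
```

A non-empty gap analysis blocks sign-off for intended use until remediation and re-testing close the gap.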
Questions auditors may ask:
• How do you ensure AI systems are fit for purpose?
• What validation have you performed?
• Show me fitness assessment for [AI system]
• How do you monitor ongoing fitness?
• What happens when fitness degrades?
A.9.4 Human Oversight
| Attribute | Details |
|---|---|
| Control | The organization shall define, implement, and document processes for human oversight of AI systems. |
| Purpose | Maintain appropriate human control over AI systems |
| Related Clause | 8.1 (Operational planning and control) |
Human oversight is one of the most important controls in ISO 42001. It ensures humans remain in control of AI systems and can intervene when necessary. This is also a key requirement of the EU AI Act for high-risk AI systems.
Implementation Guidance
- Determine appropriate oversight level for each AI system
- Design oversight mechanisms into AI systems
- Define roles and responsibilities for oversight
- Train personnel on oversight procedures
- Implement monitoring and alerting
- Enable human intervention and override
- Document oversight processes and decisions
Levels of Human Oversight
| Level | Description | When Appropriate |
|---|---|---|
| Human-in-the-Loop | Human approval required for each AI decision | High-risk decisions, early deployment |
| Human-on-the-Loop | Human monitors AI and can intervene | Medium-risk, established systems |
| Human-over-the-Loop | Human oversight of AI design and outcomes | Lower-risk, high-volume operations |
Oversight Mechanisms
| Mechanism | Description |
|---|---|
| Approval Gates | Human approval before AI action takes effect |
| Review Sampling | Human review of sample AI decisions |
| Threshold Alerts | Alerts when AI confidence is low or output is unusual |
| Override Capability | Ability to override or reverse AI decisions |
| Kill Switch | Ability to stop AI system operation |
| Audit Trails | Records for post-hoc human review |
| Escalation | Automatic escalation of edge cases to humans |
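Several of the mechanisms above can be combined in one routing function: a threshold alert, automatic escalation, and an audit trail. A minimal sketch, assuming a confidence score is available for each decision (the threshold value and decision names are illustrative assumptions):

```python
import datetime

CONFIDENCE_THRESHOLD = 0.80  # assumed cut-off for automatic action
audit_trail = []             # records kept for post-hoc human review

def route_decision(decision: str, confidence: float) -> str:
    """Auto-apply confident decisions; escalate the rest to a human."""
    route = "auto" if confidence >= CONFIDENCE_THRESHOLD else "escalate_to_human"
    audit_trail.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "confidence": confidence,
        "route": route,
    })
    return route

route_decision("approve_refund", 0.93)   # -> "auto"
route_decision("deny_refund", 0.61)      # -> "escalate_to_human"
```

The audit trail entries double as the "records of oversight activities" evidence listed below, and the escalation path is where the override capability and human review attach.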
Document for each AI system:
• Oversight level and rationale
• Oversight roles and responsibilities
• Oversight procedures and triggers
• Intervention capabilities
• Training requirements for oversight personnel
• Monitoring and alerting mechanisms
• Records of oversight activities and interventions
Factors Affecting Oversight Level
| Factor | Higher Oversight Needed | Lower Oversight May Be Acceptable |
|---|---|---|
| Decision Impact | Significant impact on individuals | Low-impact, easily reversible |
| Autonomy | AI acts independently | AI only recommends |
| Maturity | New or changing AI system | Stable, well-validated system |
| Reversibility | Irreversible consequences | Easy to reverse or correct |
| Regulatory | Regulated domain | Unregulated context |
| Vulnerability | Affects vulnerable groups | General population |
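The factor table above can be turned into a simple screening heuristic that suggests an oversight level. The boolean factors mirror the table; the weights and cut-offs are assumptions for illustration, and a real assessment would be a documented judgment, not a formula:

```python
def suggest_oversight_level(factors: dict) -> str:
    """factors maps each factor name to True when the higher-oversight
    condition from the table applies. Equal weighting is an assumption."""
    score = sum(factors.values())
    if score >= 4:
        return "human-in-the-loop"
    if score >= 2:
        return "human-on-the-loop"
    return "human-over-the-loop"

# Screening the customer service chatbot example against the six factors:
chatbot_factors = {
    "significant_impact": False,
    "acts_independently": True,
    "new_or_changing":    False,
    "irreversible":       False,
    "regulated_domain":   False,
    "affects_vulnerable": False,
}
suggest_oversight_level(chatbot_factors)  # -> "human-over-the-loop"
```

The output is a starting point for the documented rationale, which should record why the suggested level was accepted or overridden.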
Questions auditors may ask:
• What human oversight do you have for AI systems?
• How do you determine the appropriate oversight level?
• Show me oversight documentation for [AI system]
• How can humans intervene or override AI decisions?
• What training do oversight personnel receive?
• Show me records of human oversight activities
• How do you handle AI decisions that are questioned?
Control Implementation Summary
| Control | Key Evidence | Common Gaps |
|---|---|---|
| A.9.2 Intended Use | Intended use documentation, prohibited use lists | Use boundaries not defined |
| A.9.3 Fitness for Purpose | Validation records, performance monitoring | No validation against intended use |
| A.9.4 Human Oversight | Oversight procedures, intervention records, training | No oversight mechanisms |
Key Takeaways
1. Intended use must be documented, including prohibited uses
2. AI systems must be validated as fit for their intended purpose
3. Human oversight is critical and required for high-risk AI
4. Oversight level should match risk level
5. Override and intervention capabilities are essential
6. Oversight activities should be documented