Chapter Overview
Annex C is an informative annex (not mandatory) that provides guidance on potential AI objectives and risk sources. It helps organizations identify what they want to achieve with AI and what could go wrong.
Annex C Purpose
Annex C provides:
• Potential objectives organizations may have for AI systems
• Risk sources to consider in AI risk assessments
Use Annex C as a reference when:
• Setting AI objectives (Clause 6.2)
• Conducting AI risk assessments (Clauses 6.1.2 and 8.2)
• Performing impact assessments (Clause 8.4)
C.1 AI System Objectives
Organizations develop, provide, or use AI systems to achieve various objectives. Annex C lists potential objectives to consider.
Categories of AI Objectives
| Category | Objective Examples |
|---|---|
| Performance | Accuracy, reliability, efficiency, scalability |
| Safety | Safe operation, harm prevention, fail-safe behavior |
| Security | Confidentiality, integrity, availability, resilience |
| Privacy | Data protection, consent management, anonymization |
| Fairness | Non-discrimination, equitable outcomes, bias prevention |
| Transparency | Explainability, understandability, disclosure |
| Accountability | Clear responsibility, auditability, traceability |
| Human Oversight | Human control, intervention capability, override |
| Robustness | Resilience, error handling, adversarial resistance |
| Compliance | Legal compliance, regulatory adherence, standards |
Detailed AI Objectives
Performance Objectives
| Objective | Description |
|---|---|
| Accuracy | AI outputs are correct and reliable |
| Precision | AI produces consistent, repeatable results |
| Efficiency | AI operates with optimal resource use |
| Availability | AI systems are accessible when needed |
| Scalability | AI handles increasing workloads |
| Responsiveness | AI provides timely outputs |
Safety Objectives
| Objective | Description |
|---|---|
| Harm Prevention | AI does not cause physical or psychological harm |
| Fail-Safe Operation | AI fails in a safe manner |
| Predictable Behavior | AI behaves as expected |
| Bounded Operation | AI operates within defined limits |
Ethical Objectives
| Objective | Description |
|---|---|
| Fairness | AI treats all groups equitably |
| Non-Discrimination | AI does not discriminate based on protected characteristics |
| Human Dignity | AI respects human dignity and rights |
| Beneficence | AI provides benefit to users and society |
| Autonomy | AI supports human decision-making autonomy |
Using AI Objectives
When setting objectives (Clause 6.2):
1. Review Annex C objective categories
2. Identify objectives relevant to your AI systems
3. Prioritize based on context and risk
4. Define measurable targets where possible
5. Align with organizational values and policy
6. Document selected objectives and rationale (one possible format is sketched after this list)
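One way to carry out steps 4 and 6 is to record each selected objective in a small, structured form. The Python sketch below is a minimal illustration under assumed field names and target values; it is not a format prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass

@dataclass
class AIObjective:
    """One documented AI objective (illustrative structure only)."""
    category: str                   # Annex C category, e.g. "Performance", "Fairness"
    name: str                       # e.g. "Accuracy"
    target: str                     # measurable target, where one can be defined
    rationale: str                  # why this objective was selected
    clause_reference: str = "6.2"   # AIMS clause the objective supports

# Hypothetical examples of documented objectives
objectives = [
    AIObjective(
        category="Performance",
        name="Accuracy",
        target=">= 95% top-1 accuracy on the held-out validation set",
        rationale="Incorrect outputs would directly affect customer decisions",
    ),
    AIObjective(
        category="Fairness",
        name="Non-Discrimination",
        target="Selection-rate difference between demographic groups <= 5 percentage points",
        rationale="Legal and ethical obligation to avoid discriminatory outcomes",
    ),
]

for obj in objectives:
    print(f"[{obj.category}] {obj.name}: {obj.target}")
```

Recording objectives in this structured way makes it straightforward to check them against Annex C categories and to review them when the context changes.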
C.2 Risk Sources
Annex C identifies potential sources of AI risk to consider during risk assessment.
Risk Source Categories
| Category | Description |
|---|---|
| Data-Related | Risks arising from data used in AI systems |
| Model-Related | Risks from AI model design and behavior |
| Technology-Related | Risks from technical infrastructure and tools |
| Human-Related | Risks from human interaction with AI |
| Organizational | Risks from organizational factors |
| External | Risks from external environment |
Data-Related Risk Sources
| Risk Source | Description | Example Risks |
|---|---|---|
| Data Quality | Issues with data accuracy, completeness, timeliness | Incorrect predictions, unreliable outputs |
| Data Bias | Systematic bias in training data | Discriminatory outcomes, unfair decisions |
| Data Representativeness | Data not representing target population | Poor performance for underrepresented groups |
| Data Privacy | Personal data exposure risks | Privacy violations, regulatory non-compliance |
| Data Provenance | Unknown or unreliable data sources | Unverifiable data, licensing issues |
| Data Poisoning | Malicious manipulation of training data | Compromised model behavior |
| Data Drift | Changes in data distribution over time (a simple check is sketched after this table) | Model performance degradation |
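Data drift, the last risk source in the table above, is one of the few entries that can be monitored directly with a routine statistical check. The sketch below is a minimal illustration using a two-sample Kolmogorov-Smirnov test from SciPy; the feature values, sample sizes, and significance threshold are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def check_feature_drift(reference: np.ndarray, current: np.ndarray,
                        alpha: float = 0.05) -> bool:
    """Flag possible drift for one numeric feature.

    A two-sample Kolmogorov-Smirnov test compares current (production) values
    against reference (training-time) values; a small p-value suggests the
    distribution has changed. The alpha threshold is an illustrative assumption.
    """
    result = stats.ks_2samp(reference, current)
    drifted = result.pvalue < alpha
    if drifted:
        print(f"Possible data drift: KS statistic={result.statistic:.3f}, "
              f"p-value={result.pvalue:.4f}")
    return drifted

# Hypothetical data: production values have shifted relative to training values
rng = np.random.default_rng(0)
training_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_sample = rng.normal(loc=0.4, scale=1.0, size=5_000)
check_feature_drift(training_sample, production_sample)
```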
Model-Related Risk Sources
| Risk Source | Description | Example Risks |
|---|---|---|
| Model Accuracy | Model does not meet performance requirements | Incorrect decisions, failed objectives |
| Model Robustness | Model sensitive to input variations | Inconsistent behavior, exploitation |
| Model Interpretability | Model decisions cannot be explained | Lack of trust, compliance issues |
| Model Bias | Model exhibits unfair behavior | Discrimination, reputational damage |
| Adversarial Vulnerability | Model susceptible to adversarial attacks | Security breaches, manipulated outputs |
| Concept Drift | Underlying patterns change over time | Outdated model, poor performance |
| Overfitting | Model too specialized to training data (a simple check is sketched after this table) | Poor generalization, unreliable in production |
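Overfitting, listed above, can often be surfaced through routine evaluation discipline. The sketch below is a minimal illustration that flags a large gap between training and validation scores; the 0.10 gap threshold is an assumed value, not a prescribed one.

```python
def flag_possible_overfitting(train_score: float, validation_score: float,
                              max_gap: float = 0.10) -> bool:
    """Flag a model whose training score far exceeds its validation score.

    A large gap is a common symptom of overfitting (the model is too
    specialised to its training data). The max_gap threshold is illustrative.
    """
    gap = train_score - validation_score
    if gap > max_gap:
        print(f"Possible overfitting: train={train_score:.2f}, "
              f"validation={validation_score:.2f}, gap={gap:.2f}")
        return True
    return False

# Hypothetical scores from a model evaluation
flag_possible_overfitting(train_score=0.99, validation_score=0.82)
```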
Technology-Related Risk Sources
| Risk Source | Description | Example Risks |
|---|---|---|
| Infrastructure Failure | Computing/network infrastructure issues | AI system unavailability |
| Security Vulnerabilities | Technical security weaknesses | Data breaches, system compromise |
| Integration Issues | Problems integrating AI with other systems | System failures, data inconsistencies |
| Scalability Limits | Infrastructure cannot handle demand | Performance degradation, outages |
| Tool/Library Issues | Bugs or vulnerabilities in AI tools | Unexpected behavior, security risks |
Human-Related Risk Sources
| Risk Source | Description | Example Risks |
|---|---|---|
| Misuse | AI used outside intended purpose | Harm, liability, compliance violations |
| Over-Reliance | Excessive trust in AI outputs | Uncritical acceptance of errors |
| Under-Reliance | Ignoring valid AI outputs | Missed benefits, inefficiency |
| Skill Gaps | Inadequate user/operator competence | Misoperation, errors, incidents |
| Automation Complacency | Reduced vigilance due to automation | Missed issues, delayed response |
| Social Engineering | Manipulation of AI users | Security breaches, data leakage |
Organizational Risk Sources
| Risk Source | Description | Example Risks |
|---|---|---|
| Governance Gaps | Inadequate AI oversight and control | Unmanaged risks, accountability issues |
| Resource Constraints | Insufficient resources for AI management | Inadequate controls, rushed deployments |
| Communication Failures | Poor communication about AI | Misunderstanding, improper use |
| Change Management | Poorly managed AI system changes | Unexpected impacts, incidents |
| Vendor Dependency | Over-reliance on AI vendors | Vendor lock-in, service disruption |
External Risk Sources
| Risk Source | Description | Example Risks |
|---|---|---|
| Regulatory Changes | New or changing AI regulations | Compliance gaps, required changes |
| Threat Actors | Malicious actors targeting AI | Attacks, data theft, manipulation |
| Market Changes | Changes in competitive landscape | Obsolescence, competitive disadvantage |
| Public Perception | Negative public view of AI | Reputational damage, adoption resistance |
| Technology Evolution | Rapid AI technology changes | Technical debt, skill gaps |
Using Risk Sources in Assessment
During risk assessment (Clauses 6.1.2 and 8.2):
1. Review Annex C risk source categories
2. Consider each category for your AI systems
3. Identify specific risks relevant to context
4. Assess likelihood and consequence
5. Document identified risks in the risk register (one possible format is sketched after this list)
6. Use the categories as a checklist to ensure comprehensive coverage
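One way to apply steps 3 to 5 is to record each identified risk against an Annex C category in a structured risk register. The sketch below is a minimal, illustrative format; the field names and the 1-5 scoring scales are assumptions, not requirements of ISO/IEC 42001.

```python
from dataclasses import dataclass

# Annex C-aligned risk source categories, used as a checklist during identification
RISK_SOURCE_CATEGORIES = (
    "Data-Related", "Model-Related", "Technology-Related",
    "Human-Related", "Organizational", "External",
)

@dataclass
class RiskRegisterEntry:
    """One identified AI risk (illustrative structure only)."""
    risk_id: str
    category: str        # one of RISK_SOURCE_CATEGORIES
    source: str          # e.g. "Data Bias"
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain) -- assumed scale
    consequence: int     # 1 (negligible) .. 5 (severe) -- assumed scale

    @property
    def risk_level(self) -> int:
        """Simple likelihood x consequence score (assumed evaluation method)."""
        return self.likelihood * self.consequence

# Hypothetical entry
entry = RiskRegisterEntry(
    risk_id="AI-RISK-001",
    category="Data-Related",
    source="Data Bias",
    description="Historical hiring data under-represents some applicant groups",
    likelihood=4,
    consequence=4,
)
assert entry.category in RISK_SOURCE_CATEGORIES
print(f"{entry.risk_id}: level {entry.risk_level}")
```

Tying the category field to the Annex C list makes it easy to confirm at review time that every category was at least considered.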
Key Takeaways - Annex C
1. Annex C is informative (guidance, not mandatory)
2. Use the AI objectives in C.1 when setting your AIMS objectives
3. Use the risk sources in C.2 as a checklist during risk assessment
4. Risk source categories cover data, model, technology, human, organizational, and external factors
5. Tailor to your specific context and AI systems
6. Annex C helps ensure comprehensive risk identification