Annex A Controls: Information for Interested Parties (A.8)
Detailed guidance on implementing the Annex A controls for AI transparency and communication (A.8), covering stakeholder communication, user documentation, interaction disclosure, and explainability across four controls.
Chapter Overview
This chapter covers the Information for Interested Parties domain (A.8), which ensures organizations communicate appropriately about their AI systems to stakeholders, spanning transparency, user documentation, and explainability. The domain contains four controls.
A.8 Information for Interested Parties of AI Systems
Transparency and communication are essential for building trust in AI systems.
Stakeholders need information about AI systems to:
• Make informed decisions
• Understand how they're affected
• Exercise their rights
• Trust the organization
• Comply with regulations (e.g., the EU AI Act requires transparency)
A.8.2 Communication of Information to Interested Parties
| Attribute | Details |
|---|---|
| Control | Relevant information about AI systems shall be communicated to interested parties. |
| Purpose | Ensure stakeholders have necessary information |
| Related Clause | 7.4 (Communication) |
Implementation Guidance
- Identify information needs of different stakeholders
- Determine appropriate communication channels
- Develop communication materials
- Ensure accessibility of information
- Maintain communication records (see the sketch after this list)
- Update communications when AI systems change
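To make the record-keeping and update points above concrete, the sketch below shows one way to structure a communication record so that superseded notices can be traced when an AI system changes. The class and field names are illustrative assumptions, not terms from the standard.

```python
# A minimal sketch of a communication record, assuming records are kept in
# application code. CommunicationRecord and its fields are illustrative
# assumptions, not terms from the standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CommunicationRecord:
    ai_system: str             # AI system the communication concerns
    stakeholder_group: str     # e.g. "Users", "Regulators", "AI Subjects"
    summary: str               # what information was communicated
    channel: str               # e.g. "user guide", "privacy notice"
    date_sent: date
    accessible_formats: list[str] = field(default_factory=list)
    superseded_by: str | None = None  # set when the AI system changes

# Example: recording an updated notice after a model change
record = CommunicationRecord(
    ai_system="loan-scoring-v2",
    stakeholder_group="AI Subjects",
    summary="Updated notice on automated credit decisions and appeal rights",
    channel="privacy policy",
    date_sent=date(2024, 6, 1),
    accessible_formats=["HTML", "plain text"],
)
```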
Stakeholder Information Needs
| Stakeholder | Information Needs | Channels |
|---|---|---|
| Users | How to use, limitations, support contacts | User guides, help systems, training |
| AI Subjects | That AI is being used, how decisions are made, rights | Notices, disclosures, privacy policies |
| Regulators | Compliance information, technical details | Reports, documentation, audit access |
| Customers | AI capabilities, limitations, data practices | Contracts, documentation, websites |
| Public | AI governance approach, responsible AI commitment | Website, annual reports, press |
Audit Questions
• What information do you communicate about AI systems?
• How do you identify stakeholder information needs?
• Show me communications for [specific stakeholder group]
• How do you ensure communications are accessible?
• How do you update communications when AI systems change?
A.8.3 User Documentation
| Attribute | Details |
|---|---|
| Control | Documentation for users shall be provided according to defined requirements. |
| Purpose | Enable effective and appropriate use of AI systems |
| Related Clause | 7.5 (Documented information) |
Implementation Guidance
- Define documentation requirements for each AI system
- Create user-appropriate documentation
- Include usage instructions, limitations, and warnings
- Provide training materials where needed
- Keep documentation current
- Make documentation accessible
User Documentation Content
| Section | Content |
|---|---|
| Purpose | What the AI system does and its intended use |
| Instructions | How to use the system correctly |
| Inputs/Outputs | What inputs are needed, what outputs are produced |
| Limitations | What the system cannot do, known constraints |
| Warnings | Potential risks, misuse scenarios to avoid |
| Support | How to get help, report issues |
| Updates | How users are notified of changes |
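These required sections lend themselves to a simple completeness check. The sketch below assumes each system's documentation is tracked as a set of section names; `REQUIRED_SECTIONS` and `missing_sections` are hypothetical names introduced for this example.

```python
# Illustrative sketch: checking that a system's user documentation covers
# every section in the table above.
REQUIRED_SECTIONS = {
    "Purpose", "Instructions", "Inputs/Outputs",
    "Limitations", "Warnings", "Support", "Updates",
}

def missing_sections(doc_sections: set[str]) -> set[str]:
    """Return the required sections absent from a documentation draft."""
    return REQUIRED_SECTIONS - doc_sections

# Example: a draft guide that still lacks Warnings and Updates sections
draft = {"Purpose", "Instructions", "Inputs/Outputs", "Limitations", "Support"}
print(missing_sections(draft))  # {'Warnings', 'Updates'}
```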
Effective user documentation is:
• Written for the target audience (not too technical)
• Illustrated with practical examples
• Clear about its limitations
• Easy to find and access
• Kept up to date
• Available in appropriate languages
Audit Questions
• What user documentation exists for AI systems?
• Show me documentation for [specific AI system]
• How do you determine documentation requirements?
• How do you keep documentation current?
• How do users access documentation?
A.8.4 Information Regarding AI Interaction
| Attribute | Details |
|---|---|
| Control | Interested parties interacting with or subject to AI system decisions shall be informed of their interaction with the AI system, as appropriate. |
| Purpose | Ensure people know when AI is involved |
| Related Clause | 7.4 (Communication) |
Implementation Guidance
- Identify where AI interaction disclosure is needed
- Determine appropriate disclosure methods
- Implement disclosure mechanisms
- Ensure disclosures are clear and understandable
- Document disclosure practices
- Monitor compliance with disclosure requirements
AI Interaction Disclosure Scenarios
| Scenario | Disclosure Approach |
|---|---|
| Chatbot/Virtual Assistant | "You are chatting with an AI assistant" |
| Automated Decision | "This decision was made using automated processing" |
| AI-Generated Content | Label indicating AI generation |
| AI Recommendation | "This recommendation is AI-generated" |
| Biometric AI | Notice of AI-based biometric processing |
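As a minimal illustration of the chatbot scenario above, the sketch below prepends a disclosure to the first reply of a session so it appears at the point of interaction. The function name and wording are assumptions, not part of any particular framework.

```python
# A minimal sketch of the chatbot disclosure scenario: the notice is attached
# to the opening reply of a conversation. Names and wording are illustrative.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def disclose_on_first_reply(is_first_reply: bool, reply: str) -> str:
    """Attach the AI disclosure to the opening reply of a conversation."""
    if is_first_reply:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(disclose_on_first_reply(True, "How can I help you today?"))
```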
The EU AI Act requires disclosure when people are:
• Interacting with AI systems (e.g., chatbots)
• Subject to emotion recognition or biometric categorization
• Exposed to AI-generated/manipulated content (deepfakes)
Under the Act, disclosure must be provided "in a clear and distinguishable manner", at the latest at the time of the first interaction or exposure.
Audit Questions
• How do you inform people when they interact with AI?
• Show me AI interaction disclosures
• What triggers the need for disclosure?
• How do you ensure disclosures are clear?
• How do you handle regulatory disclosure requirements?
A.8.5 Information for Achieving Explainability
| Attribute | Details |
|---|---|
| Control | Information for achieving explainability of AI system outputs and the AI system's functioning shall be documented according to defined requirements. |
| Purpose | Enable understanding of how AI systems work and make decisions |
| Related Clause | 7.5 (Documented information) |
Implementation Guidance
- Define explainability requirements for each AI system
- Determine appropriate level of explanation
- Implement explainability mechanisms
- Document how explanations are generated
- Validate explanations are accurate and useful
- Make explanations accessible to intended audience
Levels of Explainability
| Level | Description | Audience |
|---|---|---|
| System-level | Overall system purpose and approach | General stakeholders |
| Model-level | How the model works, key factors | Technical reviewers |
| Decision-level | Why a specific output was produced | Users, affected individuals |
| Feature-level | Which inputs influenced the output | Technical users, auditors |
Explainability Techniques
| Technique | Description | Use Case |
|---|---|---|
| Feature Importance | Rank inputs by influence on output | Understanding key drivers |
| SHAP Values | Contribution of each feature to prediction | Individual decision explanation |
| LIME | Local interpretable explanations | Explaining specific predictions |
| Counterfactuals | What would change the outcome | Actionable insights |
| Decision Trees | Rule-based explanation extraction | Simplified logic representation |
| Attention Maps | What the model focused on | Image/text model explanations |
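To make two of these techniques concrete, the sketch below computes global feature importances and per-decision SHAP values for a toy classifier. It assumes scikit-learn and the shap package are installed; the data is synthetic and the setup illustrates only the mechanics, not a production pipeline.

```python
# Sketch of two techniques from the table: global feature importance
# (model-level) and SHAP values (decision-level), on a toy model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # 4 input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by features 0 and 1

model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-level: rank inputs by their overall influence on the output
print("Feature importances:", model.feature_importances_)

# Decision-level: contribution of each feature to one specific prediction
explainer = shap.TreeExplainer(model)
print("SHAP values for first sample:", explainer.shap_values(X[:1]))
```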
For each AI system, document:
• Explainability requirements (who needs what level)
• Explainability approach and techniques used
• Limitations of explanations
• How explanations are generated
• How explanations are presented to users
• Validation of explanation accuracy
Audit Questions
• What are your explainability requirements?
• How do you explain AI decisions?
• Show me explanation mechanisms for [AI system]
• How do you validate explanations are accurate?
• What explainability techniques do you use?
• How do you document explainability?
Control Implementation Summary
| Control | Key Evidence | Common Gaps |
|---|---|---|
| A.8.2 Communication | Communication materials, stakeholder analysis | Ad-hoc communication only |
| A.8.3 User Documentation | User guides, help systems, training materials | Documentation incomplete or outdated |
| A.8.4 AI Interaction | Disclosure notices, labeling mechanisms | No disclosure when AI is used |
| A.8.5 Explainability | Explainability documentation, technique evidence | Black box with no explanations |
Key Takeaways
1. Communication should be tailored to different stakeholder groups
2. User documentation must include limitations and warnings
3. People must be informed when interacting with AI (regulatory requirement)
4. Explainability requirements vary by audience and context
5. Explanations must be documented, accurate, and accessible
6. Transparency builds trust and supports compliance