Annex A Controls: Policies & Internal Organization (A.2-A.3)
Detailed guidance on implementing the Annex A controls for AI policies (A.2) and internal organization (A.3): 6 controls in total, each with implementation guidance and audit questions.
Chapter Overview
This chapter covers the first two control domains in Annex A: Policies for AI (A.2) and Internal Organization (A.3). These foundational controls establish governance structures and accountability for AI management.
Understanding Annex A
Annex A organizes its controls into 9 domains:
• A.2 Policies for AI (2 controls)
• A.3 Internal Organization (4 controls)
• A.4 Resources for AI Systems (4 controls)
• A.5 Assessing Impacts (4 controls)
• A.6 AI System Life Cycle (12 controls)
• A.7 Data for AI Systems (5 controls)
• A.8 Information for Interested Parties (4 controls)
• A.9 Use of AI Systems (3 controls)
• A.10 Third-Party Relationships (3 controls)
A.2 Policies for AI
This domain establishes the policy framework for AI governance with 2 controls.
A.2.2 AI Policy
| Attribute | Details |
|---|---|
| Control | Policies for AI shall be defined, approved by management, published, communicated to relevant personnel and relevant interested parties, and acknowledged. |
| Purpose | Establish management direction and commitment for AI governance |
| Related Clause | 5.2 (AI Policy) |
Implementation Guidance
- Develop AI policy aligned with organizational strategy
- Include responsible AI principles
- Define scope and applicability
- Obtain formal management approval
- Publish on accessible platforms (intranet, website)
- Communicate through multiple channels
- Implement acknowledgment mechanism (training sign-off, digital acceptance)
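Where acknowledgments are captured digitally, a simple structured record makes the audit trail easy to produce on request. Below is a minimal sketch in Python, assuming a hypothetical record layout; the field names and helper are illustrative, not prescribed by the standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Acknowledgment:
    """One person's acknowledgment of a specific AI policy version."""
    person_id: str
    policy_version: str
    acknowledged_on: date
    channel: str  # e.g. "training sign-off" or "digital acceptance"

def outstanding(personnel: list[str], acks: list[Acknowledgment],
                current_version: str) -> list[str]:
    """Return personnel who have not acknowledged the current policy version."""
    acknowledged = {a.person_id for a in acks if a.policy_version == current_version}
    return [p for p in personnel if p not in acknowledged]

# Example: two staff, one acknowledgment of version 2.0 -> one person outstanding.
acks = [Acknowledgment("emp-001", "2.0", date(2024, 3, 1), "digital acceptance")]
print(outstanding(["emp-001", "emp-002"], acks, "2.0"))  # ['emp-002']
```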
Evidence Examples
- Approved AI policy document with signatures
- Publication records (intranet, website)
- Communication records (emails, training)
- Acknowledgment records (signed forms, system logs)
Audit Questions
• Show me your AI policy
• Who approved the policy and when?
• How is the policy communicated to personnel?
• How do staff acknowledge the policy?
• How is the policy made available to external interested parties?
A.2.3 Review of the Policies for AI
| Attribute | Details |
|---|---|
| Control | Policies for AI shall be reviewed at planned intervals or if significant changes occur to ensure their continuing suitability, adequacy, and effectiveness. |
| Purpose | Ensure policies remain current and effective |
| Related Clause | 9.3 (Management Review) |
Implementation Guidance
- Define review frequency (typically annual)
- Establish triggers for unscheduled reviews (regulatory changes, incidents, organizational changes)
- Assign review responsibility
- Document review process and outcomes
- Update policies based on review findings
- Re-communicate updated policies
Review Triggers
| Trigger Type | Examples |
|---|---|
| Planned | Annual review cycle |
| Regulatory | New AI regulations (EU AI Act) |
| Incident | Significant AI incidents |
| Organizational | Mergers, restructuring, strategy changes |
| Technology | New AI capabilities, significant system changes |
| Audit | Findings from internal/external audits |
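The planned interval and the triggers above lend themselves to a simple "is a review due?" check. The sketch below assumes an annual cycle and an illustrative set of trigger identifiers; none of these names come from the standard.

```python
from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=365)   # assumed annual review cycle
TRIGGER_EVENTS = {"regulatory_change", "ai_incident", "reorganization",
                  "major_system_change", "audit_finding"}

def review_due(last_review: date, events_since_review: set,
               today: Optional[date] = None) -> bool:
    """A review is due when the planned interval has elapsed or any
    defined trigger event has occurred since the last review."""
    today = today or date.today()
    overdue = today - last_review >= REVIEW_INTERVAL
    triggered = bool(set(events_since_review) & TRIGGER_EVENTS)
    return overdue or triggered

# Example: last reviewed in January, an AI incident occurred since -> review due.
print(review_due(date(2024, 1, 15), {"ai_incident"}, today=date(2024, 7, 1)))  # True
```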
Audit Questions
• When was the AI policy last reviewed?
• What is your planned review frequency?
• What triggers an unscheduled policy review?
• Show me evidence of the last review
• What changes were made as a result of the review?
A.3 Internal Organization
This domain establishes organizational structures and responsibilities for AI governance with 4 controls.
A.3.2 Roles and Responsibilities
| Attribute | Details |
|---|---|
| Control | Roles and responsibilities relevant to the development, provision, or use of AI systems shall be defined and allocated according to the organization's needs. |
| Purpose | Ensure clear accountability for AI activities |
| Related Clause | 5.3 (Roles, responsibilities and authorities) |
Implementation Guidance
- Identify all AI-related roles
- Define responsibilities for each role
- Document in job descriptions or responsibility matrix
- Allocate roles to specific individuals
- Communicate assignments
- Review when organizational changes occur
Key AI Roles
| Role | Typical Responsibilities |
|---|---|
| AIMS Owner | Overall AIMS accountability, reporting to management |
| AI System Owner | Accountability for specific AI system governance |
| AI Risk Owner | Ownership of AI-related risks |
| AI Developer | Development according to standards and controls |
| Data Owner | Quality and governance of AI training data |
| AI Ethics Lead | Responsible AI principles and guidance |
| AI Auditor | Independent assessment of AI systems and AIMS |
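One way to keep allocations auditable is to hold the responsibility matrix as structured data, mapping each AI system to named role holders, and to check it for gaps. A minimal sketch with hypothetical system names, role keys, and people:

```python
# Responsibility matrix: AI system -> role -> allocated person (illustrative only).
MATRIX = {
    "credit-scoring-model": {
        "system_owner": "A. Ortega",
        "risk_owner": "B. Chen",
        "data_owner": "C. Malik",
    },
    "support-chatbot": {
        "system_owner": "D. Novak",
        "risk_owner": None,            # gap: no risk owner allocated yet
    },
}

REQUIRED_ROLES = ("system_owner", "risk_owner", "data_owner")

def unallocated_roles(matrix: dict) -> dict[str, list[str]]:
    """Return, per AI system, any required roles with no person allocated."""
    gaps = {}
    for system, roles in matrix.items():
        missing = [r for r in REQUIRED_ROLES if not roles.get(r)]
        if missing:
            gaps[system] = missing
    return gaps

print(unallocated_roles(MATRIX))
# {'support-chatbot': ['risk_owner', 'data_owner']}
```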
Audit Questions
• How are AI roles and responsibilities defined?
• Show me documentation of AI-related responsibilities
• Who is the owner of [specific AI system]?
• How do personnel know their AI responsibilities?
• How are responsibilities updated when changes occur?
A.3.3 Reporting
| Attribute | Details |
|---|---|
| Control | Personnel shall report observed or suspected AI-related incidents, vulnerabilities, or risks following defined procedures. |
| Purpose | Enable timely identification and response to AI issues |
| Related Clause | 7.4 (Communication) |
Implementation Guidance
- Establish reporting procedures
- Define what should be reported (incidents, near-misses, vulnerabilities, risks)
- Provide multiple reporting channels (email, portal, hotline)
- Enable anonymous reporting if appropriate
- Train personnel on reporting procedures
- Acknowledge and track reports
- Protect reporters from retaliation
Reportable Events
| Category | Examples |
|---|---|
| Incidents | AI system failures, incorrect outputs, security breaches |
| Near-misses | Potential failures caught before impact |
| Vulnerabilities | Discovered weaknesses, potential attack vectors |
| Risks | Newly identified risks, changing risk levels |
| Concerns | Ethical concerns, bias observations, compliance issues |
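To support acknowledging and tracking reports, an intake record can mirror the categories above. The sketch below uses hypothetical fields and a simple status progression; adapt it to whatever ticketing or GRC tool is actually in use.

```python
from dataclasses import dataclass, field
from datetime import datetime

CATEGORIES = {"incident", "near-miss", "vulnerability", "risk", "concern"}

@dataclass
class AIReport:
    """A single report raised by personnel through any reporting channel."""
    report_id: str
    category: str                 # one of CATEGORIES
    description: str
    channel: str                  # e.g. "portal", "email", "hotline"
    anonymous: bool = False
    status: str = "received"      # received -> acknowledged -> triaged -> closed
    raised_at: datetime = field(default_factory=datetime.now)

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

# Example: an anonymous bias concern reported via the portal, then acknowledged.
r = AIReport("RPT-0042", "concern", "Possible bias in loan approvals", "portal",
             anonymous=True)
r.status = "acknowledged"
print(r.report_id, r.category, r.status)
```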
Audit Questions
• How do personnel report AI-related incidents?
• What reporting channels are available?
• Show me your incident reporting procedure
• How are reports tracked and responded to?
• Are personnel trained on what and how to report?
A.3.4 Authorities
| Attribute | Details |
|---|---|
| Control | Responsibilities and authorities for handling AI-related events shall be defined. |
| Purpose | Enable effective response to AI events |
| Related Clause | 5.3 (Roles, responsibilities and authorities) |
Implementation Guidance
- Define authority levels for AI decisions
- Establish escalation paths
- Define who can approve AI deployments
- Define who can halt or modify AI systems
- Document decision-making authority
- Ensure authorities match responsibilities
Authority Examples
| Decision/Action | Authority Level |
|---|---|
| Approve new AI system deployment | AI Governance Committee / CTO |
| Emergency AI system shutdown | AI System Owner / On-call Lead |
| Accept AI risk above threshold | Executive Risk Committee |
| Approve AI policy changes | Top Management / Board |
| Approve AI vendor selection | Procurement + AI Governance |
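Holding the authority matrix above as structured data lets tooling confirm that an approval came from a role that is actually authorized for that decision. A sketch with illustrative decision keys and role names:

```python
# Decision -> roles authorized to approve it (illustrative, not prescriptive).
AUTHORITY_MATRIX = {
    "deploy_ai_system": {"AI Governance Committee", "CTO"},
    "emergency_shutdown": {"AI System Owner", "On-call Lead"},
    "accept_high_risk": {"Executive Risk Committee"},
    "change_ai_policy": {"Top Management", "Board"},
    "select_ai_vendor": {"Procurement", "AI Governance"},
}

def is_authorized(decision: str, approver_role: str) -> bool:
    """True if the approver's role is authorized for the given decision."""
    return approver_role in AUTHORITY_MATRIX.get(decision, set())

# Example: an on-call lead may trigger an emergency shutdown,
# but may not approve a new deployment.
print(is_authorized("emergency_shutdown", "On-call Lead"))  # True
print(is_authorized("deploy_ai_system", "On-call Lead"))    # False
```

Joint approvals (such as Procurement plus AI Governance for vendor selection) would need a stricter check requiring sign-off from every listed role; the single-role lookup here is only the simplest form.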
Audit Questions
• Who has authority to approve AI deployments?
• Who can shut down an AI system in an emergency?
• How are authorities documented?
• What is the escalation path for AI events?
• Show me evidence of authority exercised for a recent event
A.3.5 Coordination
| Attribute | Details |
|---|---|
| Control | Personnel involved in AI systems within the organization shall coordinate their activities. |
| Purpose | Ensure effective collaboration across AI activities |
| Related Clause | 7.4 (Communication) |
Implementation Guidance
- Establish coordination mechanisms (meetings, forums, committees)
- Define interfaces between teams (development, operations, risk, compliance)
- Create AI governance committee or working group
- Implement collaboration tools and platforms
- Share lessons learned across teams
- Coordinate change management across dependent systems
Coordination Mechanisms
| Mechanism | Purpose | Frequency |
|---|---|---|
| AI Governance Committee | Strategic oversight, policy decisions | Monthly/Quarterly |
| AI Working Group | Operational coordination, issue resolution | Weekly/Bi-weekly |
| Cross-functional Reviews | AI system reviews with multiple stakeholders | Per project/release |
| Incident Response Team | Coordinate response to AI incidents | As needed |
| Community of Practice | Share knowledge, best practices | Monthly |
Audit Questions
• How do AI teams coordinate their activities?
• What governance committees or forums exist?
• How do development and operations teams coordinate?
• How are cross-functional dependencies managed?
• Show me evidence of coordination activities
Control Implementation Summary
| Control | Key Evidence | Common Gaps |
|---|---|---|
| A.2.2 AI Policy | Approved policy, communication records, acknowledgments | No acknowledgment process |
| A.2.3 Policy Review | Review records, change history | No defined review schedule |
| A.3.2 Roles | Role definitions, assignments, job descriptions | Roles not formally documented |
| A.3.3 Reporting | Reporting procedure, training records, incident logs | No reporting mechanism |
| A.3.4 Authorities | Authority matrix, approval records | Authorities not documented |
| A.3.5 Coordination | Meeting records, committee terms of reference | No formal coordination |
Key Takeaways
1. AI policy must be approved, published, communicated, AND acknowledged
2. Policies require planned reviews and trigger-based reviews
3. All AI-related roles must be defined and allocated
4. Reporting procedures enable personnel to flag AI issues
5. Authorities must be defined for AI decisions and events
6. Coordination ensures effective collaboration across teams