VERSION 1.0
GuideNet Enterprise AI Control Architecture
Translating AI governance expectations into implementable, auditable, proportionate enterprise controls.
12
Control Domains
88
Defined Control Objectives
With built-in forward and reverse mapping to regulations and key frameworks
12 Control Domains
A comprehensive, framework-aligned architecture spanning the full AI governance lifecycle — from oversight and data integrity to ethics, resilience, and auditability.
  • 88 defined control objectives
  • Forward & reverse regulatory mapping
  • Aligned to NIST, ISO 42001, EU AI Act, GDPR
  • ICFR-relevant controls embedded throughout
The control architecture adapts to maturity
Foundational Governance
Designed for progressive adoption across AI maturity stages — establishing foundational governance and use case control.
Embedding Monitoring & Auditability
Embedding monitoring, auditability, and ICFR-aligned assurance as AI scales.
Framework Modelling and Mapping
Core Principles
Every regulatory obligation is traceable to a control.
Every control is defensible against regulation.

The architecture is structurally modelled against:
  • NIST AI RMF
  • ISO 42001
  • GDPR
  • EU AI Act
  • OECD and UK AI principles
  • ICFR Relevance
It incorporates:
Forward Mapping
GuideNet control objectives aligned to framework clauses and regulatory articles.
Reverse Mapping
Regulatory obligations traceable back to specific enterprise control objectives.
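The forward and reverse views described above can be kept consistent by deriving one from the other. A minimal sketch, assuming a hypothetical subset of control IDs and citations (the real mapping matrix is far larger): the reverse index is computed from the forward map, so an obligation can never be mapped to a control without the control also claiming that obligation.

```python
# Illustrative sketch: forward mapping from control objectives to framework
# clauses, with the reverse mapping derived automatically so the two views
# cannot drift apart. Control IDs and citations are a hypothetical subset.
from collections import defaultdict

FORWARD = {
    "GV-06": ["NIST AI RMF GV.1.6", "ISO 42001 Clause 4.3", "EU AI Act Art. 71"],
    "DG-01": ["ISO 42001 Clause 8.6", "GDPR Art. 30", "EU AI Act Art. 10"],
    "MT-05": ["NIST AI RMF ME.2.4", "EU AI Act Art. 12"],
}

def build_reverse(forward):
    """Invert control -> obligations into obligation -> controls."""
    reverse = defaultdict(list)
    for control, obligations in forward.items():
        for obligation in obligations:
            reverse[obligation].append(control)
    return dict(reverse)

REVERSE = build_reverse(FORWARD)
print(REVERSE["EU AI Act Art. 10"])  # → ['DG-01']
```

Because the reverse index is derived, adding one forward entry automatically updates both traceability directions.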
Designed to plug into compliance, ICFR and audit environments
The GuideNet Enterprise AI Control Architecture can:
Operate as a standalone AI control framework
Integrate into ICFR and technology controls environments
Extend model risk management structures
Anchor AI assurance programmes
Provide a structured basis for mapping to additional regulatory requirements

Version 1.0 establishes a stable enterprise control baseline. The full control architecture and detailed mapping matrix are available at GuideNet.ai
Control Architecture Layered Snapshot — Deep Dive
Detailed breakdown of core control objectives across key layers:
1
Governance and Accountability Layer
Domain 01 - Governance and Oversight
GV 06 - Maintain AI Use Case Inventory and Risk Classification: Central register of all AI use cases and models with owner, purpose, criticality, data categories, third parties, ICFR relevance, lifecycle stage, and risk classification. Quarterly certification by owners.
Mapped to:
  • NIST AI RMF - GV.1.5, GV.1.6, GV.2.1
  • ISO 42001 - Clause 4.3, Clause 6.1
  • EU AI Act - Articles 6, 9 and 71
  • OECD: Accountability, Robustness & Safety, Transparency (secondary)
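The GV 06 register fields above lend themselves to a structured record. A hypothetical sketch, assuming illustrative field names and a 90-day certification window (neither is prescribed by the architecture itself):

```python
# Hypothetical sketch of one AI use case inventory record carrying the
# fields GV 06 calls for. Field names and the 90-day quarterly window
# are assumptions, not part of the published architecture.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIUseCaseRecord:
    use_case_id: str
    owner: str
    purpose: str
    criticality: str          # e.g. "high" / "medium" / "low"
    data_categories: list
    third_parties: list
    icfr_relevant: bool
    lifecycle_stage: str      # e.g. "pilot", "production", "retired"
    risk_classification: str
    last_certified: date

    def certification_due(self, today: date, window_days: int = 90) -> bool:
        """Quarterly certification: due once the last sign-off exceeds the window."""
        return today - self.last_certified > timedelta(days=window_days)
```

A register built from such records makes the quarterly owner certification a simple query rather than a manual chase.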
2
Data and Model Integrity Layer
Domain 02 - Data Governance and Lineage
DG 01 - Ensure datasets used for training and testing have documented provenance and lineage: Maintain dataset inventory with source, owner, and purpose; evidence approvals for data ingestion; store lineage records in a central repository accessible to audit teams.
Mapped to:
  • NIST AI RMF - MP.4.2, MP.1.1
  • ISO 42001 - Clause 8.6, 7.5
  • GDPR - Articles 5(2), 14, 30
  • EU AI Act - Article 10
  • OECD: Accountability, Robustness & Safety
3
Monitoring and Performance Management Layer
Domain 03 - Model Transparency and Explainability
MT 05 - Ensure auditability of AI decisions: Capture decision-level logs with timestamp, inputs, outputs, and model version; protect logs from alteration; provide access to auditors and regulators on request.
Mapped to:
  • NIST AI RMF - ME.2.4, ME.2.8, ME.3.1
  • ISO 42001 - Clause 8.2, 7.5
  • EU AI Act - Articles 9 and 12
  • ICFR Relevant
  • OECD: Accountability, Transparency, Robustness (secondary)
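One way to satisfy the "protect logs from alteration" element of MT 05 is a hash-chained log, where each entry commits to the previous one. This is a minimal sketch of that idea, not the architecture's prescribed mechanism; a real deployment would typically add WORM storage or cryptographic signing.

```python
# Tamper-evident decision log sketch: each entry is chained to the hash of
# the previous entry, so editing any past record breaks verification.
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log, inputs, outputs, model_version):
    """Append a decision record with timestamp, I/O, model version, and chain hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "outputs": outputs,
        "model_version": model_version,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; any altered entry invalidates the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Auditors can then be given read access plus the `verify_chain` routine as evidence that decision records were not altered after the fact.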
Control Environment, Safeguards & Assurance Layers
Control Environment and Safeguards Layer — Domain 05: Security and Access Control
SA 04 - Ensure AI systems have kill-switch/fail-safe protocols: Maintain kill-switch procedures for disabling compromised models; test kill-switch annually; evidence test results in governance logs.
Mapped to:
  • NIST AI RMF – 2.4, 1.3, 4.3
  • ISO 42001 - Clause 8.7, 8.8
  • GDPR - Article 32
  • EU AI Act - Articles 14 and 15
  • ICFR Relevant
  • OECD: Robustness & Safety, Accountability
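The SA 04 kill-switch can be pictured as a feature-flag style disable that routes traffic away from a compromised model and records the action for the governance log. A hypothetical sketch; class and method names are illustrative assumptions, not part of the architecture:

```python
# Illustrative kill-switch register for SA 04: disabling a model is a
# logged, reversible-by-governance action, and the audit trail doubles
# as evidence for the annual kill-switch test.
from datetime import datetime, timezone

class ModelKillSwitch:
    def __init__(self):
        self._disabled = {}  # model_id -> (reason, timestamp)

    def disable(self, model_id: str, reason: str):
        """Take a compromised model out of service and record why and when."""
        self._disabled[model_id] = (reason, datetime.now(timezone.utc))

    def is_live(self, model_id: str) -> bool:
        return model_id not in self._disabled

    def audit_trail(self):
        """Evidence for governance logs: what was disabled, why, and when."""
        return [
            {"model": m, "reason": r, "at": t.isoformat()}
            for m, (r, t) in self._disabled.items()
        ]
```

Serving code would consult `is_live` before every invocation, making the fail-safe a single, testable control point.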
Assurance and Evidence Layer — Domain 06: Accountability and Oversight
AC 08 - Ensure a formal go or no-go decision is taken before deployment: A gated readiness review checks intended purpose, acceptance criteria, pilot or sandbox results, risk and control completion, and sign-offs from product owner, risk, legal, and security. Go or no-go outcome recorded in governance minutes and evidence repository.
Mapped to:
  • NIST AI RMF – MP.1.1, MP.1.5, MP.1.6, ME.2.4, ME.2.8
  • ISO 42001 - Clause 8.1, 4.3, 10.2
  • EU AI Act - Articles 16, 26 and 43
  • GDPR - Article 24
  • ICFR Relevant
  • OECD: Accountability; Robustness & Safety
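The AC 08 gate is essentially a conjunction: every readiness check must pass and every required function must sign off before deployment. A minimal sketch under that reading; the check and role names are illustrative assumptions:

```python
# Hypothetical AC 08 readiness gate: "GO" only when all gate checks pass
# and all four required sign-offs are present. Names are assumptions.
REQUIRED_SIGNOFFS = {"product_owner", "risk", "legal", "security"}

GATE_CHECKS = (
    "intended_purpose_documented",
    "acceptance_criteria_met",
    "pilot_results_reviewed",
    "risk_and_controls_complete",
)

def readiness_decision(checks: dict, signoffs: set) -> str:
    """Return 'GO' only if every check passes and sign-offs are complete."""
    all_checks_pass = all(checks.get(item, False) for item in GATE_CHECKS)
    if all_checks_pass and REQUIRED_SIGNOFFS <= signoffs:
        return "GO"
    return "NO-GO"
```

The returned outcome is what would be minuted and filed in the evidence repository, giving auditors a single, reproducible decision record.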
Assurance and Evidence Layer — Domain 11: Evidence and Auditability
EA 04 – Provide external stakeholders confidence in AI systems and controls: Commission external assurance for high-risk AI systems; scope aligned to NIST/ISO/AI Act; evidence assurance reports and management responses.
Mapped to:
  • NIST AI RMF – ME.1.3, MP.5.1
  • ISO 42001 - Clause 9.2, 9.3
  • EU AI Act - Articles 43 and 47
  • GDPR – Article 42/43 (where required)
  • ICFR Relevant
  • OECD: Accountability, Transparency
Reverse Mapping Snapshot
From obligation or standard requirement to GuideNet control IDs
NIST AI RMF
Govern 1.6 - Mechanisms are in place to inventory AI systems with resources aligned to priorities.
Reverse mapping to architecture controls:
  • GV-06 → establishes the enterprise AI register and classification
  • MP-05 → ensures lifecycle tracking including retirement (completeness of inventory)
  • GV-07 → establishes guardrails for experimentation and detects unauthorised or "shadow AI" use
ISO 42001
Clause 8.6 - Manage data across the AI lifecycle.
Reverse mapping to architecture controls:
  • DG-01 → ensures lineage and provenance
  • DG-02 → ensures data quality and representativeness
GDPR
Article 25 - Data protection by design and default.
Reverse mapping to architecture controls:
  • DG-02 → Data quality, completeness, and representativeness
  • DG-04 → Data minimisation, anonymisation, lawful processing
  • MT-01 → Ensure AI models are fully documented with design, assumptions, and limitations
EU AI Act
Article 10 - Ensure training, validation, and testing datasets are relevant, representative, free of errors, and appropriately governed.
Reverse mapping to architecture controls:
  • DG-01 → ensures dataset provenance and traceability
  • DG-03 → ensures bias detection and mitigation
Full Architecture and Detailed Mapping
Request the complete architecture, including forward and reverse mapping.
Disclaimer
The content presented in this publication is provided for informational and professional development purposes only. It reflects the authors' professional judgment at the time of publication and does not constitute legal, regulatory, audit, compliance, or other professional advice.
This material may reference or draw conceptual alignment from publicly available frameworks and guidance, including but not limited to the NIST AI Risk Management Framework, ISO/IEC 42001, the EU Artificial Intelligence Act, the UK Corporate Governance Code, and other relevant standards. Such references are provided solely for context. GuideNet.ai and the authors make no representation of compliance with, endorsement by, or affiliation to any such frameworks, regulatory bodies, or standards-setting organisations.
This publication does not represent a complete or definitive set of controls for any organisation, jurisdiction, or regulatory requirement. It is not intended to be relied upon as the sole basis for governance, compliance, audit, or assurance decisions.
Users are responsible for assessing the relevance and applicability of the content in the context of their specific organisational, regulatory, legal, and operational environments. Controls and approaches should be tailored to reflect the nature, scale, and complexity of AI use cases and the existing control environment.
Any actions or decisions taken based on this material are undertaken at the user's own risk. To the fullest extent permitted by law, the authors and GuideNet.ai disclaim all liability for any loss or damage arising from reliance on or use of this publication.
All rights, including copyright and intellectual property rights, in and to this publication are reserved by the authors and GuideNet.ai. No part of this material may be copied, reproduced, distributed, adapted, or disclosed to third parties in any form or by any means without prior written consent.