FRC Guidance: Enhancing Audit Quality with AI & Guidenet.ai's RACM
This comprehensive guide integrates the Financial Reporting Council's insights on implementing and documenting AI tools in audit procedures, fostering innovation while upholding professional standards. It also introduces Guidenet.ai's powerful Risk and Control Matrix (RACM).
Introduction: AI's Growing Role in Audit
The use of artificial intelligence (AI) in audit procedures has been anticipated for many years. That prospect is fast becoming a reality: tools that use AI are now being deployed on live engagements, and many more are in development.
The FRC encourages innovation and believes AI, deployed responsibly and appropriately, has the potential to significantly enhance audit quality. Higher audit quality supports greater trust in UK companies' financial reporting, reducing the risk premium the market may charge them to access capital, and therefore improving their competitiveness and ability to grow.
In this publication, the term AI refers to a broad range of systems, comprising both traditional machine learning techniques and deep learning models, including generative AI.
Scope and Purpose of This Guide
This publication comprises two main parts: an illustrative example of a potential use case of AI to enhance procedures over journals, and guidance on documenting tools that use AI.
The material in this publication is not prescriptive and does not represent a static set of FRC expectations. The FRC recognises that this field is moving quickly and will continue to engage across the profession, both in the UK and internationally, to ensure our standards and guidance remain appropriate.
It is important to note that auditors should apply at least the same expectations outlined in this document when they encounter AI systems used by management in financial reporting. This ensures consistent standards are applied whether AI is used by auditors or by the entities they audit.
Illustrative Example: AI-Enhanced Journal Testing
Background
XYZ LLP has been the auditor of ABC PLC, a listed retail business, for several years. The firm has recently completed in-house development of a technology-enabled tool to enhance fraud procedures, using AI to identify items that are unusual relative to the population and therefore potentially anomalous.
Purpose
While traditional fraud procedures often consist of filtering journals by rules-based criteria, this tool allows identification of more subtle patterns that may indicate risk, enhancing the quality of the procedure. The tool has progressed through limited deployment and is now mandated on qualifying engagements.
Development of the AI Tool
Model Selection
The firm considered two main options: an unsupervised machine learning model that applies statistical techniques to identify unusual items, and a deep learning model (neural network) trained on large quantities of data to recognize patterns.
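The guidance does not specify a particular model, but a minimal sketch of the first option, an unsupervised statistical approach, might look like the following, using scikit-learn's IsolationForest on illustrative journal-entry features. The feature names and data are hypothetical, not part of the FRC example.

```python
# A minimal, hypothetical sketch of the unsupervised option: scoring journal
# entries with an Isolation Forest. Feature names and values are illustrative.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Illustrative journal-entry features (a real tool would derive many more,
# e.g. posting user, account pairings, round-amount indicators).
journals = pd.DataFrame({
    "amount": [120.50, 98.10, 1_000_000.00, 75.20, 110.00],
    "hour_posted": [10, 11, 2, 14, 9],          # hour of day the entry was posted
    "days_to_period_end": [20, 18, 0, 25, 19],  # proximity to period end
})

model = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
model.fit(journals)

# score_samples returns higher values for "normal" items; negate so that
# higher scores mean more anomalous.
journals["anomaly_score"] = -model.score_samples(journals)
print(journals.sort_values("anomaly_score", ascending=False))
```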
Explainability
The firm determined users should understand why transactions were flagged as unusual, allowing audit teams to design responsive procedures. This required either selecting an inherently explainable model or augmenting a complex model with explainable AI techniques.
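One inherently explainable technique, offered here as an assumption rather than the firm's actual method, is to attribute a flagged item's unusualness to individual features, for example via robust z-scores showing which attributes deviate most from the population:

```python
# Hypothetical feature-attribution sketch: for a flagged journal entry,
# report how far each feature sits from the population median (in robust
# z-score terms), so the audit team can see *why* it looks unusual.
import numpy as np
import pandas as pd

def explain_entry(journals: pd.DataFrame, index: int) -> pd.Series:
    """Robust z-scores of one entry's features vs. the population."""
    median = journals.median()
    # Median absolute deviation, scaled for consistency with std deviation.
    mad = (journals - median).abs().median() * 1.4826
    z = (journals.loc[index] - median) / mad.replace(0, np.nan)
    return z.abs().sort_values(ascending=False)

# e.g. explain_entry(journals, 2) might show "amount" and "hour_posted"
# as the features driving the flag for the entry posted at 2am.
```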
Data Usage
The firm used a combination of real and synthetic data for testing and calibration, ensuring appropriate authorization was obtained and data processing met legal and regulatory requirements.
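The guidance does not describe how the synthetic data was produced. One simple approach, sketched below under that assumption, is to sample plausible journal entries and inject known anomalies so the tool's detection rate can be measured during testing and calibration:

```python
# Hypothetical synthetic-data sketch: generate plausible journal entries and
# inject labelled anomalies so detection rates can be measured in testing.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000

synthetic = pd.DataFrame({
    "amount": rng.lognormal(mean=5, sigma=1, size=n).round(2),
    "hour_posted": rng.integers(8, 18, size=n),      # business hours
    "days_to_period_end": rng.integers(1, 30, size=n),
    "is_injected_anomaly": False,
})

# Inject a handful of known anomalies: large, out-of-hours, period-end entries.
idx = rng.choice(n, size=10, replace=False)
synthetic.loc[idx, ["amount", "hour_posted", "days_to_period_end"]] = [5_000_000.0, 3, 0]
synthetic.loc[idx, "is_injected_anomaly"] = True
```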
Methodology Development
The methodology supporting the tool was developed concurrently with the tool itself, with representatives from the firm's central methodology team included in the project's leadership. This promoted collaboration between methodology and technology experts and ensured the final tool was well embedded in the firm's methodology.
Combined Approach
The firm decided to combine the AI tool with traditional rules-based techniques rather than replace them entirely. This addresses a limitation of the AI tool: because it identifies only items that are unusual relative to the population, it could miss unusual transactions that are posted consistently.
Calibration Process
Significant professional judgment was required to calibrate how much weight each routine should contribute to identifying riskier transactions, and to set the thresholds. This required both theory and experimentation with data to ensure a robust approach.
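As an illustration of the combined approach and its calibration, building on the earlier Isolation Forest sketch, the routines could be blended into a single risk score. The specific rules, weights, and threshold below are hypothetical calibration judgments, not values from the guidance.

```python
# Hypothetical sketch of combining rules-based flags with the AI anomaly
# score. The weights and threshold are exactly the kind of calibration
# judgment the guidance describes; the values here are made up.
import pandas as pd

def combined_risk_score(journals: pd.DataFrame) -> pd.Series:
    # Traditional rules-based flags (illustrative criteria only).
    rule_round_amount = (journals["amount"] % 1_000 == 0).astype(float)
    rule_out_of_hours = (~journals["hour_posted"].between(7, 19)).astype(float)
    rule_period_end = (journals["days_to_period_end"] <= 1).astype(float)

    # Normalise the AI anomaly score to [0, 1] for weighting.
    score = journals["anomaly_score"]
    ai_component = (score - score.min()) / (score.max() - score.min())

    # Calibrated weights: how much each routine contributes (hypothetical).
    return (0.4 * ai_component
            + 0.2 * rule_round_amount
            + 0.2 * rule_out_of_hours
            + 0.2 * rule_period_end)

# Items above a calibrated threshold (e.g. 0.6) would be selected for follow-up.
```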
Using the AI Tool in Practice
Eligibility Assessment
The team begins by determining whether the criteria for mandatory tool use are met: sufficient data quality (complete and accurate general ledger data with the required fields) and a need for additional evidence to address the fraud risk.
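A minimal sketch of what such eligibility checks could look like follows; the field names and tolerance are assumptions, since the actual criteria would be set centrally by the firm.

```python
# Hypothetical eligibility checks on a general ledger extract. Field names
# and the netting tolerance are illustrative, not FRC criteria.
import pandas as pd

REQUIRED_FIELDS = ["entry_id", "posting_date", "account", "amount", "user_id"]

def gl_data_is_eligible(gl: pd.DataFrame) -> tuple[bool, list[str]]:
    issues = []
    missing = [f for f in REQUIRED_FIELDS if f not in gl.columns]
    if missing:
        issues.append(f"missing required fields: {missing}")
    present = [f for f in REQUIRED_FIELDS if f in gl.columns]
    if present and gl[present].isna().any().any():
        issues.append("null values in required fields")
    # Completeness check: debits and credits should net to zero.
    if "amount" in gl.columns and abs(gl["amount"].sum()) > 0.01:
        issues.append("ledger does not net to zero")
    return (not issues, issues)
```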
Tool Execution
The team documents that criteria are met and runs the tool, which identifies journals deemed high risk. The tool is integrated into audit software with controls ensuring only the latest approved version is used.
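The version control described above could be as simple as a gate that refuses to run an unapproved build. The sketch below is a hypothetical illustration; the registry and version identifiers are assumptions.

```python
# Hypothetical version gate: refuse to run unless the tool build matches the
# latest centrally approved version. The registry below is a stand-in for a
# firm's central approval record.
APPROVED_VERSIONS = {"journal_anomaly_tool": "2.3.1"}

def check_approved(tool_name: str, running_version: str) -> None:
    approved = APPROVED_VERSIONS.get(tool_name)
    if running_version != approved:
        raise RuntimeError(
            f"{tool_name} v{running_version} is not the approved version "
            f"(expected v{approved}); update before running the procedure."
        )

check_approved("journal_anomaly_tool", "2.3.1")  # passes silently
```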
Follow-up Procedures
The team follows up on flagged items, considering why transactions were identified as potentially anomalous to determine appropriate evidence-gathering procedures.
Systemic Assessment
The methodology requires teams to be alert for information indicating the tool's assessment may be systemically flawed in the engagement context.
Documentation Guidance: Central Documentation
The FRC has provided comprehensive guidance on what should be documented centrally by firms regarding AI tools, whether developed in-house or obtained from third parties.
Tool Description and Function
Explanation of what the tool does conceptually, its objective, and the nature of the underlying technology in broad terms.
Appropriate Use Criteria
The criteria that should be met for tool use to be appropriate, including data characteristics, transaction categories, and business model considerations.
Development Process
The rationale for development, standards compliance, data sources and permissions, model selection and architecture, training approach, and version history.
For third-party tools, firms may need to rely on independent assurance that the tool operates as intended when full development information isn't available.
Documentation Guidance: Additional Central Elements
Quality Assurance
The governance architecture around development and operation, and key steps in any certification process including operational testing.
Training and Support
Available materials on appropriate use, operation, and output interpretation, including strategies to mitigate automation bias.
Explainability Design
How the tool was designed to be appropriately explainable, recognizing that appropriate levels of explainability vary based on intended use.
AI Principles Alignment
Documentation of how the tool aligns with the UK government's five AI principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Documentation on the Audit File
For Automated Tools and Techniques (ATTs) that use AI
The FRC guidance outlines key material that should be documented on the audit file for AI-powered ATTs. As a guiding principle, the more widely used a tool is across engagements, the more documentation can shift toward central repositories.
Tool Description
Brief explanation of what the tool does conceptually, its objective, version number, and any team-specific configuration or modifications.
Appropriateness Assessment
The team's assessment against centrally determined criteria, particularly how they ensured input data completeness and accuracy.
Approval Evidence
Evidence of approval from relevant central functions (unless centrally documented for universally approved tools).
Output Consideration
How the team used the outputs to conclude on relevant judgments or inform further procedures.
Other AI Tools in Audit
For AI tools that are not used to perform audit procedures directly, there may be no requirement to document their use on the audit file, provided that use is not needed for an experienced auditor to understand the basis for the auditor's report or significant matters arising. However, teams may choose to document such use where it would help reviewers better understand the work performed.
Examples of Other AI Tools
  • Tools that create first drafts of workpapers
  • Tools that review work to identify omissions or inconsistencies
  • Chatbots that teams can use to query the firm's methodology
Documentation Considerations
While formal documentation requirements may be limited, teams should consider whether documenting the use of these tools would:
  • Enhance transparency about how work was performed
  • Provide context for reviewers
  • Support quality control processes
Regulatory Context and AI Principles
The FRC's expectations are informed by the regulatory environment with respect to AI, including the UK government's five AI principles. These principles provide a framework for responsible AI development and use:
Safety, Security and Robustness
Ensuring AI systems operate reliably, securely, and as intended even in unexpected situations or when facing attempts to compromise them.
Transparency and Explainability
Making AI systems understandable to users and those affected by them, with appropriate levels of disclosure about how decisions are made.
Fairness
Developing and using AI systems that are inclusive and accessible, avoiding unfair bias or discrimination against individuals or groups.
Accountability and Governance
Establishing clear responsibility and oversight for AI systems throughout their lifecycle, from development to deployment and use.
Contestability and Redress
Providing mechanisms for people to challenge AI-based decisions that affect them and seek correction or redress when needed.
Auditing Entities Using AI in Financial Reporting
Auditors should apply at least the same expectations outlined in this document when they encounter AI systems used by management in financial reporting. This ensures consistent standards whether AI is used by auditors or by the entities they audit.
Understanding the System
Auditors need to understand how management's AI tools function, their purpose, and how they impact financial reporting.
Evaluating Controls
Assessment of management's governance and controls over AI systems, including development, testing, and ongoing monitoring.
Testing Outputs
Procedures to test the reliability and appropriateness of outputs from management's AI systems that affect financial statements.
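One illustrative way to test such outputs, offered as an assumption rather than an FRC-prescribed procedure, is to reperform a sample of the AI system's outputs independently and investigate differences above a tolerance:

```python
# Hypothetical reperformance test: compare a sample of outputs from
# management's AI system against an independent auditor expectation and
# report exceptions above a tolerance. All names and values are illustrative.
import pandas as pd

TOLERANCE = 0.05  # 5% relative difference; an assumed, materiality-based limit

sample = pd.DataFrame({
    "item": ["A", "B", "C"],
    "ai_output": [100.0, 250.0, 40.0],           # value produced by management's AI
    "auditor_expectation": [102.0, 210.0, 41.0],  # independently derived estimate
})

sample["relative_diff"] = (
    (sample["ai_output"] - sample["auditor_expectation"]).abs()
    / sample["auditor_expectation"]
)
exceptions = sample[sample["relative_diff"] > TOLERANCE]
print(exceptions)  # item B would be investigated further
```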
When auditing entities using AI in their financial reporting processes, auditors should apply professional skepticism and consider whether additional specialized skills or knowledge are needed on the engagement team.
Balancing Innovation and Professional Standards
The FRC encourages innovation while ensuring adherence to professional standards. This balance is critical for maintaining audit quality while embracing technological advancement.
Supporting Innovation
The FRC recognizes that AI has the potential to significantly enhance audit quality when deployed responsibly. The guidance aims to support firms in implementing innovative approaches while providing clarity on expectations.
The material is not prescriptive and does not represent a static set of expectations, acknowledging the rapidly evolving nature of AI technology.
Maintaining Standards
While encouraging innovation, the FRC emphasizes that the fundamental requirements of auditing standards remain unchanged. AI tools must support auditors in meeting these standards, not replace professional judgment.
Documentation requirements aim to be proportionate, recognizing that over-documentation can divert resources from areas where they can better enhance audit quality.
Introducing the Risk and Control Matrix (developed by Guidenet.ai; not part of the FRC guidance)
To support consistent, high-quality application of the FRC guidance, this comprehensive Risk and Control Matrix (RACM) translates expectations and audit standard obligations into a practical control framework.
The RACM is structured to facilitate seamless integration of AI-related controls across audit engagements, encompassing 15 key controls strategically grouped across five critical domains of AI governance and assurance.
1. Governance and Oversight: Ensuring clear roles, responsibilities, and ethical considerations for AI deployment.
2. Data Quality and Integrity: Controls over the completeness, accuracy, and relevance of data used by AI systems.
3. Model Development and Validation: Processes for designing, building, testing, and validating AI models for intended use.
4. Deployment and Monitoring: Controls for secure deployment, continuous monitoring, and performance evaluation of AI in operation.
5. Documentation and Reporting: Requirements for comprehensive documentation and transparent reporting on AI system design, use, and outcomes.
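To make the structure concrete, the five domains could be represented as a machine-readable control register, sketched below. The domain names come from the matrix above; the control IDs and example descriptions are hypothetical.

```python
# Hypothetical machine-readable skeleton of the RACM's five domains. Domain
# names follow the matrix above; control IDs and descriptions are illustrative.
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str
    description: str

@dataclass
class Domain:
    name: str
    controls: list[Control] = field(default_factory=list)

racm = [
    Domain("Governance and Oversight",
           [Control("GOV-01", "Roles and responsibilities for AI deployment are defined")]),
    Domain("Data Quality and Integrity",
           [Control("DQ-01", "Input data completeness and accuracy are verified")]),
    Domain("Model Development and Validation",
           [Control("MDV-01", "Models are validated against intended use before release")]),
    Domain("Deployment and Monitoring",
           [Control("DM-01", "Only approved versions are deployed; performance is monitored")]),
    Domain("Documentation and Reporting",
           [Control("DOC-01", "Design, use, and outcomes are documented")]),
]
```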
Who Benefits from the RACM?
The Risk and Control Matrix (RACM) serves as a vital resource for a diverse group of stakeholders, ensuring clarity and consistency in the application of AI within audit practices. Its comprehensive framework is tailored to meet the needs of:
Audit Engagement Teams
Providing practical guidance for the responsible use of AI-enabled tools in daily audit procedures.
Quality Reviewers & EQCRs
Facilitating consistent and effective oversight of AI integration and related controls.
Methodology Leaders
Supporting the development and update of audit frameworks to incorporate AI effectively.
Risk & Compliance Professionals
Enabling robust monitoring and assurance design concerning AI risks in audit.
Regulators & Stakeholders
Offering clear insights into AI governance and accountability within the audit profession.
Risk and Control Matrix (RACM) Details: Essential Controls for AI in Audit
Continuing our deep dive into the Risk and Control Matrix, this section outlines critical controls for integrating AI responsibly into audit processes. These controls ensure robust governance, explainability, and human oversight, maintaining audit quality and integrity.
Explore Further Resources
We've covered the critical aspects of designing and implementing a robust AI Control Framework. This comprehensive guide provides a foundation for mitigating risks, ensuring compliance, and fostering responsible AI innovation within your organization.
To deepen your understanding and access the full suite of resources, including detailed framework documentation and practical tools, visit our dedicated platform.
Disclaimer
The content presented in this publication is provided for informational and professional development purposes only. It is intended to reflect the authors’ professional judgment and insights at the time of publication and does not constitute legal, regulatory, audit, compliance, or other professional advice.
This material may reference or draw conceptual alignment from publicly available frameworks and guidance, including but not limited to the NIST AI Risk Management Framework, ISO/IEC 42001, the EU Artificial Intelligence Act, the UK Corporate Governance Code, and other relevant standards. Such references are provided solely for contextual purposes. GuideNet.ai makes no representation or claim of compliance with, endorsement by, or affiliation to any such frameworks, regulatory bodies, or standards-setting organisations.
Users of this publication are solely responsible for evaluating its applicability in the context of their specific organisational, regulatory, legal, and operational environments. Any actions or decisions taken based on this material are undertaken at the user’s own risk. The authors and GuideNet.ai expressly disclaim any liability for loss or damage arising from reliance on or implementation of the content herein.
All rights, including copyright and intellectual property rights, in and to this publication are reserved. No part of this material may be copied, reproduced, distributed, adapted, or disclosed to third parties in any form or by any means without the prior written consent of GuideNet.ai.