In collaboration with Insig AI, Falcon Windsor presents a research paper titled “Your Precocious Intern,” which offers a practical model for the responsible use of AI in corporate reporting. Analysing AI adoption across UK companies, with a specific focus on the FTSE 350, the paper reveals a rising use of generative AI tools in reporting without adequate oversight or training. Its goal is to address the risks generative AI poses to corporate reporting and to offer actionable guidance for managing AI’s integration into financial disclosures.
1. Growing Use of AI Without Adequate Training
- Adoption Trends: Generative AI is becoming more prevalent across UK companies, though often with little formal training or structured policies.
- Training Deficiency: Companies, particularly larger ones, are still in the early stages of AI adoption and have dedicated insufficient resources to training staff on its effective use.
- Impact on Reporting: Although AI is helpful, the absence of formal training could undermine the accuracy of financial reporting.
2. Investor Concerns on Accuracy and Authorship
- Investor Expectations: Investors acknowledge the potential of AI to handle large volumes of data and improve reporting efficiency but demand assurances on the accuracy and human authorship of reports.
- Risks Identified: Concerns include uniformity in AI-generated content, the perception that leadership is disengaged, and the potential for misuse through buzzwords or poorly sourced information.
- Importance of Human Oversight: Investors expect that management’s judgment and forward-looking narratives remain human-driven, not machine-generated.
3. Urgency to Integrate AI Thoughtfully
- Short Window for Action: With AI adoption still in early stages, companies have a unique opportunity to integrate AI responsibly before it becomes more deeply embedded.
- Strategic Adoption: There’s a need for clear guidelines, training, and transparency to ensure AI is used effectively while maintaining the integrity and trustworthiness of corporate reporting.
4. Regulatory Guidance on AI Use
- Desire for Clarity: While companies do not want additional regulation, they do want guidance from regulatory bodies such as the Financial Reporting Council (FRC) or the Financial Conduct Authority (FCA) on how AI affects corporate reporting and directors’ duties.
- Legal Reassurance: A clear stance from regulators on the use of AI would help companies balance innovation with their fiduciary responsibilities.
Recommendations: How to Manage AI in Reporting
- Treat AI Like a ‘Precocious Intern’: AI is useful, but untrained and prone to mistakes. It should be closely supervised, and its outputs must be checked by human experts.
- Implement Formal Training Programs: Companies must invest in comprehensive training on using generative AI for reporting, ensuring staff understand how to leverage the tool responsibly.
- Clear Disclosure: In the short term, reports should state whether AI has been used. Over time, companies should establish clear policies on AI’s role in reporting, particularly for forward-looking statements and management opinions.
The growing integration of AI into corporate reporting presents opportunities to enhance efficiency but also significant risks to the accuracy, authenticity, and trust that financial disclosures are built upon. By following a responsible approach, including training, clear policies, and transparency, companies can ensure that generative AI supports rather than undermines the reporting process.