Smarsh UK Study Shows AI Communications Surge Across Financial Services as Compliance Gaps Emerge
WHY THIS MATTERS
The rapid shift from AI experimentation to daily operational dependency in the UK financial sector has outpaced traditional compliance infrastructure. Smarsh’s data reveals that while 61% of professionals use generative AI daily, only 32% believe their firm’s surveillance systems can actually detect the risks within that content. This creates a “shadow AI” problem where regulated communications, including client advice and compliance documentation, are being produced at an unprecedented scale without the “defensible” audit trails required by the Financial Conduct Authority (FCA).
The generational divide in adoption further complicates this risk profile. Younger workers are the most frequent users and, tellingly, the most concerned about the lack of oversight. However, the finding that 81% of staff would feel more confident if their AI outputs were monitored indicates that employees are not seeking to bypass regulation; they are looking for a “safety net” that allows them to innovate without individual liability. For firms, this means that robust AI governance is no longer just a hurdle to growth, but a necessary prerequisite for employee confidence and institutional resilience.
New research from Smarsh®, the global leader in communications data and intelligence, reveals that generative AI has rapidly evolved from an emerging tool to a fixture of daily working life for UK financial services and insurance professionals – with 61% now using it every day. However, as AI tools become embedded in all areas of work, organisations are struggling to keep pace with the need to monitor and govern the resulting content, creating new and potentially significant compliance risks.
AI moves from experimentation to daily workflow
While younger workers (aged 25–34) are driving this change, with over 36% using AI tools multiple times a day, there is clear cross-generational uptake. Nearly a third (32%) of 35–54-year-olds use AI tools daily, and over a quarter (28%) of 55–64-year-olds say the same. The consequence is a significant increase in the volume of business communications: 69% say AI is increasing the amount of content they produce.
Crucially, the study found that AI is not being used solely for administrative convenience. While it is widely deployed for internal tasks such as briefing notes (50%), call summaries (49%), and internal communications (37%), it is also being applied to external and regulated content – including client and customer communications (40%), marketing and social media content (38%) and, notably, compliance documentation (34%). The scale and scope of this output presents a material risk for firms, particularly given that fewer than half (41%) of professionals say they thoroughly review and make significant edits to AI-generated outputs before they are sent or published.
Oversight gaps are emerging as AI scales, but workers see compliance as an enabler
Despite this proliferation of AI-generated content, the research reveals a significant gap in organisational oversight. Fewer than a third (32%) of financial services professionals believe their organisation’s surveillance systems are fully equipped to detect risks in AI-generated content. This concern is felt most acutely among younger professionals (43% of those aged 25–34) – the same demographic that is driving AI usage – suggesting that those most active in producing AI-generated content are also the most aware of the compliance blind spots it creates.
However, the findings also point to an important opportunity for firms. The majority of financial services professionals (81%) say they would feel more confident using AI tools for work-related tasks if they knew the outputs were properly monitored by their organisation – a 12% rise on when the same question was asked a year ago. This sentiment was strongest among younger professionals (87% of those aged 18–34). Far from being resistant to oversight, employees – especially those most actively using AI – are calling for it.
“Financial institutions are rapidly adopting generative AI to meet growing demands for faster, more personalized client engagement—but this shift is creating an unprecedented volume and complexity of communications,” said Paul Taylor, Vice President of Product at Smarsh. “Compliance leaders are now under pressure to ensure every AI-assisted interaction is transparent, supervised, and defensible. Firms need the ability to capture and govern these communications across all channels, or they risk introducing critical blind spots at a time when regulatory scrutiny is intensifying. Getting this right isn’t just about risk mitigation—it’s about enabling innovation with confidence.”
FF NEWS TAKE
The Smarsh research highlights a critical paradox: AI is being used to write the very compliance documents meant to govern it, yet fewer than half of those outputs are being thoroughly reviewed by humans. In a 2026 regulatory environment where the FCA is already using its own generative AI to triage intelligence and detect market harm, firms cannot afford to rely on legacy “keyword” monitoring. The regulator’s ongoing “Mills Review” and the upcoming April 2027 evaluation of AI live testing suggest that the window for “voluntary” AI governance is closing, soon to be replaced by mandatory accountability under the Senior Managers and Certification Regime (SM&CR).
As we move toward “agentic AI” where systems don’t just draft emails but execute workflows, the need for “Compliance by Design” becomes an existential priority. Firms that treat AI monitoring as a secondary administrative task are effectively creating a massive, unmonitored communication perimeter that regulators will eventually scrutinize. The true leaders in the 2026 fintech landscape will be those who provide their teams with “governed innovation”—where AI tools are fully integrated into the firm’s capture and archive systems, turning a potential compliance blind spot into a transparent, data-driven advantage.
How is your firm currently balancing the demand for AI-driven speed with the requirement for a “human-in-the-loop” review process?