On 27 January 2026, the UK’s Financial Conduct Authority (FCA) launched the Mills Review into the long-term impact of artificial intelligence (AI) on retail financial services. Led by Executive Director Sheldon Mills, who is responsible for delivering the Consumer Duty and the FCA’s competition obligations, the Review looks beyond current use cases to explore how increasingly advanced and interconnected AI systems could reshape market structure, firms’ operations, consumer trends and regulatory approaches by 2030 and beyond.
In this briefing, we set out our perspective on the most significant – and in some cases under-explored – themes emerging from the Engagement Paper published as part of the Review, focusing on what the Review reveals about the FCA’s current thinking and regulatory direction, and what it may mean in practice for firms in retail financial services.
The wider policy context: parliamentary scrutiny of AI
The timing of the Mills Review is notable. It comes against a backdrop of growing political and parliamentary scrutiny of AI in financial services, including an inquiry by the Treasury Select Committee into AI’s impact on consumers and financial stability. In its report published in January 2026, the Committee acknowledges the considerable benefits of AI – such as faster services and new cyber defences – but places particular emphasis on the associated risks. These include: opaque AI-driven decision-making in credit and insurance; fraud; the risk that unregulated financial advice from AI search engines misleads or misinforms customers; financial exclusion arising from product tailoring and AI-driven financial decision-making; herding behaviour in AI-driven market trading; over-reliance on third-party providers of AI and cloud services; and the increasing scale and sophistication of cyber-attacks.
The Committee is particularly critical of what it characterises as a “wait-and-see” approach by the FCA, the Bank of England and HM Treasury, arguing that they are not doing enough to manage the risks posed by AI and are thereby exposing both consumers and the wider financial system to what it terms “potentially serious harm.” The Committee calls on the FCA to provide greater clarity on how existing rules apply to the use of AI. In particular, it urges the FCA to publish, by the end of 2026, comprehensive and practical guidance for firms covering: (i) the application of consumer protection rules to their use of AI; and (ii) accountability and the level of assurance expected from senior managers under the Senior Managers and Certification Regime (SM&CR) for harm caused through the use of AI.
Viewed in this context, the Mills Review is not merely an exploratory exercise, but part of the FCA’s response to broader policy concerns about the regulation of AI – balancing regulatory certainty for firms with effective consumer protection and the integrity of the UK financial system, whilst taking account of fast-moving technological change and the pace of innovation.
A 360-degree review of AI in retail financial services
The FCA is taking a holistic approach in the Mills Review, seeking input from a wide spectrum of stakeholders, including financial firms, consumer groups, trade associations, technology providers, politicians and academics. The project team itself brings together economists, technologists, supervisors, policy specialists and consumer experts, who will engage widely and draw on academic research, international developments and responses to the review to inform the FCA’s conclusions.
The Engagement Paper frames its questions around four interrelated themes:
- How AI could evolve in the future, including the development of more autonomous and agentic systems.
- How these developments could affect markets and firms, including changes to competition and market structure and UK competitiveness.
- The impact on consumers and consumer trends, including how consumers will be influenced by AI but also influence financial markets through new expectations or behaviours.
- How financial regulators may need to evolve to continue ensuring that retail financial markets work well.
This holistic approach is likely to be welcomed by the industry. As AI becomes more widely adopted, interconnected and agentic, its potential impact on competition, consumer outcomes and market stability will grow – with each area influencing and amplifying the others. Regulatory frameworks that address risks in isolation will no longer be sufficient.
The Review is focused on retail financial services. While wholesale markets and broader societal impacts are generally out of scope, they will be considered where they have implications for retail markets. The Engagement Paper identifies, by way of example, that the widespread adoption of AI investment tools could increase retail participation in capital markets.
Agentic AI: a potential inflection point for retail finance
The FCA identifies agentic AI – systems capable of autonomous decision-making and action – as a potentially transformative development in retail finance:
“We may be approaching a genuine inflection point in how AI technology interacts with financial services. Advanced, multimodal and agentic AI systems could reshape market dynamics, alter how financial products are designed and distributed, and transform how consumers engage with firms. In some scenarios, there could be rails to enable machine-readable, programmable forms of digital assets (or money) to be exchanged and settled in real-time, with AI potentially providing decision-making autonomously.”
Looking ahead, the Engagement Paper suggests that from around 2030, the AI landscape may be defined by systems that are “more autonomous, adaptive and interconnected than ever before.” This would mark a shift away from discrete use cases towards integrated ecosystems. By 2030, consumers could also increasingly be interacting with financial services through AI-mediated interfaces rather than directly with firms.
The Engagement Paper highlights both the risks and rewards of this transition. On the supply side, agentic AI could deliver significant efficiency gains, including optimised payments, lower servicing costs, hyper-personalised retail propositions, and more automated risk assessment and claims handling. On the demand side, AI agents could increasingly act on consumers’ behalf by constructing investment strategies, comparing products and switching providers, potentially reducing friction and intensifying competition. The FCA notes, however, that these developments could compress margins on traditional advice while raising questions around suitability, transparency and potential market herding if many consumers use similar AI systems.
With regard to the Consumer Duty, the FCA notes that it will consider what “good outcomes” might mean in a world where consumers increasingly delegate financial decisions and their ability to assess whether their AI agents are acting in their interests may be limited.
The regulator is seeking views on the future direction of agentic AI and its implications for retail finance over the coming decade, including questions of accountability, assurance and market structure. It also asks who is likely to control the primary customer relationship by 2030 and beyond – listing incumbent financial services firms, “Big Tech”, specialist AI intermediaries or consumers’ own AI agents – and what this shift would mean for competition. A further focus is customer agency: as decision-making is increasingly delegated to AI, how might this affect consumer understanding, financial literacy and vulnerability?
We expect adoption of agentic AI in financial services to accelerate over the course of 2026 (see our fintech predictions, including on agentic AI trends, here). The Engagement Paper signals that agentic AI is an increasingly important focus for the FCA as it seeks to anticipate and shape the next phase of change in retail financial services. Firms should therefore monitor developments closely and be ready to respond to emerging regulatory guidance in this area.
A systemic view of technology risk
The Engagement Paper points to a more holistic approach by the FCA to AI-related risks, viewing AI not in isolation but as part of a broader technological ecosystem. This includes interactions with distributed ledger technology, open banking, digital identity solutions, and, looking further ahead, quantum computing. These developments take place in the wider context of digital finance, including blockchain and smart contracts, tokenisation and digital assets.
This has two important implications for firms. First, when designing governance frameworks today, firms should recognise that future risks and opportunities are likely to arise from combinations of technologies rather than individual AI tools. AI-driven decision-making layered onto open banking data or embedded within tokenised infrastructure could produce consumer outcomes – both positive and adverse – that are qualitatively different from those seen today.
Second, the FCA highlights the need for greater coordination with other domestic regulators, including the Competition and Markets Authority, the Information Commissioner's Office and the Digital Regulation Cooperation Forum (DRCF) (see our takeaways from the DRCF’s report on its pilot AI & Digital Hub and public call for views on agentic AI here), as well as international bodies such as the International Organisation of Securities Commissions, the Global Financial Innovation Network and the Bank for International Settlements. The FCA is also exploring whether approaches from other regulatory domains, including non-financial services sectors dealing with exponential technologies, could offer useful reference points. Taken together, this positions AI as a cross-cutting policy issue. For firms, this means AI governance frameworks cannot rely on FCA rules and guidance alone. They will need to account for how financial regulation intersects with data protection, competition and wider technology regulation, and ensure accountability across increasingly complex value chains. Firms would do well to look across sectors to identify best practice in implementing AI technologies.
Competition and consumer protection
Perhaps unsurprisingly, given its competition objective, the FCA includes multiple references to what AI might mean for competition in retail financial services. As noted above, the FCA envisages that by 2030 consumers could increasingly interact with financial services through AI-mediated interfaces rather than directly with firms. Over time, richer consumer data – potentially enabled by Open Finance – could support detailed virtual models (“digital twins”) of individuals or organisations, allowing firms to test and improve outcomes in a controlled way. This could also empower AI agents to move beyond “doing things for me” to “acting as me.” The FCA notes that:
“If consumers increasingly delegate financial decisions to AI agents, firms will potentially have to compete in novel ways for the attention of AI systems. This could make markets more competitive, if AI agents require firms to develop novel value propositions. It could, however, create new forms of market power, if AI providers favour certain firms, or if personalisation creates lock-in. These AI-enabled services could also capture significant value from financial firms while remaining outside the regulatory perimeter.”
It remains uncertain who will benefit – incumbents, challengers, or new AI-native entrants. The FCA is therefore seeking views on the drivers of market concentration, whether AI could shift the dominant players in retail finance, and the extent to which efficiency gains would be passed through to consumers as lower prices.
The FCA is assessing the entire AI value chain, including actors outside its regulatory perimeter, such as data providers, platforms and infrastructure providers. The regulator recognises that AI intermediaries are capturing an increasing share of value in retail finance. This could move value chains beyond the regulatory perimeter, or these players may enter financial services directly.
The use of AI providers has practical implications for firms, including under the Consumer Duty. Retail consumer outcomes may increasingly be shaped not only by authorised firms, but also by their technology providers whose infrastructure, models and algorithms influence pricing, product design, service quality and access. While reliance on third-party AI providers raises familiar operational risks, including outsourcing considerations, firms will increasingly also need to consider the impacts on customer outcomes.
AI-driven advice and intermediation and the limits of the FCA’s regulatory perimeter
The Engagement Paper signals the FCA’s awareness of the limits of its regulatory perimeter, particularly regarding AI-driven advice and intermediation. The FCA notes the potential emergence of consumer harm from reliance on unregulated AI for guidance or advice. Indeed, with the growing capabilities of AI, especially agentic systems, it is plausible that retail consumers will increasingly rely on AI for investment advice, personal recommendations, direct investment transactions or portfolio management.
The FCA is therefore seeking views on whether AI systems could provide services functionally equivalent to regulated activities, such as advice or intermediation, while remaining outside the regulatory perimeter. It is interested in how this might occur across different segments of retail finance and what proportion of value could migrate to unregulated services. Stakeholders’ responses could potentially inform the FCA’s ongoing Advice and Guidance Boundary Review, with the FCA planning to consult on simplifying and consolidating guidance and rules around investment advice in early 2026 (see our initial report on the Review here) following on from its recent publication of near-final rules for targeted support (see our report here). The regulator is also exploring parallels with mobile wallets, where value can be captured without becoming a regulated provider.
For unregulated companies, this underscores the need to monitor developments in the AI-driven advice and intermediation space, as we could see expansions or clarifications of the regulatory perimeter and the regulator’s expectations. Business models designed around regulatory arbitrage may face increasing scrutiny, even where authorisations are not currently required.
Adapting the regulatory approach for an AI-enabled future
The Engagement Paper confirms that the FCA does not plan to introduce new prescriptive rules or rewrite existing frameworks to regulate the use of AI. Instead, it will consider how current outcomes-based frameworks, including the Consumer Duty, the SM&CR and the Critical Third Parties regime, may need to be adapted as AI changes the pace, scale and nature of markets, firms and consumer experiences. The regulator indicates that its approach will be informed by the various 2030 scenarios it has envisioned across technology, competition and consumer trends.
For example, the FCA will assess how senior managers under SM&CR can continue to discharge responsibilities for AI deployment and maintenance, and how these responsibilities might need to be adapted under different future scenarios. It will also explore how existing consumer protection rules, including vulnerability guidance under the Consumer Duty, may be affected by AI. While AI itself has the potential to support financially vulnerable consumers, it could also create new ways for firms, or intermediaries, to target vulnerable groups. Similarly, the trend toward hyper-personalisation raises questions about how regulatory expectations will apply in an AI-enabled context. We expect the FCA’s focus will be on fairness, transparency and suitability.
The FCA also wants to optimise its regulatory approach to ensure it is ready for the potentially transformational changes that AI may bring. New and evolving risks may include more sophisticated forms of fraud, manipulation and financial crime, posing further challenges for firms and regulators seeking to prevent, detect and mitigate harms. This could shift the regulator’s balance between preventative and reactive supervision, with the Review signalling a need to consider the consequences for the FCA’s enforcement toolkit, given the potential for harms to scale rapidly, involve autonomous systems or turn on complex technical facts.
Looking ahead, the FCA aims to balance support for innovation with mitigation of emerging risks, including AI-powered fraud, autonomous social engineering and identity compromise. This may lead to heightened expectations on firms regarding accountability, auditability and the safe deployment of high-risk AI systems.
Next steps
The FCA has invited input from stakeholders, with responses due by 24 February 2026. Sheldon Mills is expected to report to the FCA Board in the summer, setting out recommendations based on the engagement. This will culminate in an external publication to support further informed debate.
With technological advancements moving at pace, the Mills Review cannot provide certainty about the speed or direction of AI developments. However, the aim of the Review is to develop recommendations that can provide a clear path for the FCA to remain prepared, adaptive and able to support an innovative UK financial services sector. Key to this will be ensuring that markets work well and that consumers are sufficiently protected. In this context, it will be important for firms and other stakeholders to consider the themes of the Review carefully and provide relevant evidence and input to inform the direction of the debate and the future regulatory approach.
