Artificial Intelligence is increasingly becoming a part of how advisers work, with around half of UK firms already utilising some form of AI. For those who haven’t yet gone down this road, the biggest concern is usually around regulation and whether the FCA will have an issue with it.

The good news is the FCA isn’t trying to block AI adoption, but it does expect firms to treat AI in the same way they treat any other tool that impacts clients, advice processes or operational resilience. Below I give a breakdown of what the FCA expects and what firms should be considering now.

The FCA’s position on AI

The FCA has stated it is technology‑agnostic – meaning it doesn’t create rules for each type of tool. Instead, it expects firms to apply the existing rules to these tools:

  • Consumer Duty – does the tool align with the Consumer Duty and good customer outcomes?
  • Senior Managers and Certification Regime (SM&CR) – is accountability clearly assigned to a Senior Manager?
  • Senior Management Arrangements, Systems and Controls (SYSC) – are the appropriate governance, oversight and resilience controls in place?

There is currently no standalone AI rulebook; instead, the FCA's focus is on outcomes, fairness, accountability and oversight. However, the regulator is closely monitoring AI roll-out through, for example, its AI Lab and live testing.

What this means for advisers

If AI is being used in a way that influences advice, suitability, research, portfolio construction, operations or client communications, the FCA expects you to demonstrate:

  • How the tool supports Consumer Duty outcomes.
  • That proper governance and controls sit around it.
  • That a named Senior Manager is responsible for the risks.

Key risks advisers need to be aware of

Black box risk – some AI tools operate in ways that can’t be easily explained. This becomes a problem if the output affects a client, as advisers must be able to understand and explain why a tool produced a recommendation and be able to challenge and override it when needed.

Hallucinations and accuracy – AI tools, especially large language models, can produce confident‑sounding but incorrect outputs. These fabricated or inaccurate outputs, known as hallucinations, can look plausible at first glance and only fall apart on closer inspection. It’s important to keep a human in the loop (especially for anything client‑facing) to stop incorrect outputs being relied upon.

Fairness and bias – AI tools are only as good as the data they are trained on and use. If the data behind a tool contains bias, the outputs will likely reinforce and potentially exacerbate this bias. Firms are expected to test for unfair outcomes and show how they monitor and correct these.

Outsourcing and third‑party dependence – bringing in an external AI provider does not remove accountability from the firm. You’re still responsible for due diligence, contractual controls and ongoing oversight.

Practical steps for advice firms

Before adoption

  • Be clear on the problem you’re looking to solve – define what you’re trying to improve, which clients the problem affects and how you’ll measure outcomes.
     
  • Set ownership under SM&CR – name the Senior Manager responsible and reflect this in your governance map and risk register.
     
  • Carry out a proportionate risk assessment – look at risks around data, model performance, operational resilience, bias and explainability, and record how these risks will be managed.
     
  • Think about data protection early on – ensure the tool aligns with GDPR on purpose, lawful basis, minimisation, retention, storage and access.

During deployment

  • Start with a pilot – use a limited dataset and small user group to test for accuracy, fairness and reliability. Document all testing, and any issues, changes or fixes needed.
     
  • Keep human oversight – all client‑facing content must be reviewed before use, and you should keep an audit trail of where you have had to override the tool.
     
  • Explainability and disclosure – staff should understand how the tool works, what its limitations are, and how to explain it to clients where required.

Ongoing oversight

  • Monitor performance and drift – track accuracy, error rates and alignment with your KPIs. Maintain version control and records of any changes.
     
  • Test fairness regularly – check outcomes across different client profiles to ensure no bias is developing.
     
  • Review operational resilience – know your third‑party dependencies, concentration risks and fallback plans.
     
  • Document everything – keep logs of usage, decisions, reviews and overrides.

Due diligence checklist for AI vendors

Your normal due diligence criteria will apply for AI tools, but you will likely want to enhance your process to include:

Model design and performance

  • What type of model is being used – different model types carry different risks, and understanding which one sits behind the tool helps you identify them.
  • Performance and error‑rate evidence.
  • How explainability is delivered.

Data and training

  • What data the model is trained on and confirmation it is lawfully sourced.
  • Whether your inputs are used for training the model.
  • How bias is assessed and addressed on an ongoing basis.

Security and privacy

  • Full data flows, storage locations, access controls and encryption.
  • Support for your Data Protection Impact Assessment (DPIA) and GDPR requirements.

Operational service

  • Service Level Agreements (SLAs), incident response, update schedules and notification commitments.

Auditability

  • Access to audit logs and decision‑level traceability.

Contract terms

  • Ownership of inputs/outputs – including any intellectual property.
  • Controls on vendor reuse of client data and confidentiality.
  • Warranties, indemnities and sensible liability caps.

Data Protection and privacy

Treat AI like any other processing activity under GDPR.

  • Confirm whether client data is anonymised.
  • Confirm where data is stored and whether there are any cross-border transfers.
  • Align your privacy notices and client terms as needed.

Liability, client communication and trust

Advice liability will ultimately still sit with the firm and the adviser, even if AI is used. It’s important to be able to explain to clients how AI is used, and ensure human review is always applied.

Whilst the FCA has issued no AI-specific rules as yet, it continues to monitor adoption across firms, and this is likely to inform any further guidance, expectations or regulation. It’s therefore important to stay up to date with FCA announcements, maintain oversight and records of how you use AI tools, and be able to adapt as needed to meet the regulator's expectations.
