Many firms are already using AI, and the pace of innovation is such that the regulator is finding it challenging to stay ahead of developments.
The Financial Conduct Authority (FCA) has recently released an AI Update, which provides insights and guidance relevant to mortgage brokers. Based on this document, here’s a summary of what you need to consider to ensure compliance and to leverage AI responsibly and effectively.
The FCA emphasises the transformative potential of AI in financial services. AI can streamline operations, enhance customer service, and support the advice process. However, alongside these benefits, it highlights the importance of adopting AI in a manner that ensures consumer protection, market integrity and effective competition.
The FCA’s regulatory framework is technology-agnostic, focusing on principles and outcomes rather than prescribing specific technologies. This allows for flexibility and innovation while ensuring that the adoption of AI aligns with regulatory objectives. Mortgage brokers should ensure that AI systems are integrated in a way that aligns with the FCA's principles of fairness, transparency, and accountability.
Working within a principles-based approach can be difficult, as clear, direct rules are much easier to interpret. With AI innovation moving at such a pace, it’s understandable that the FCA takes this approach, but it doesn’t make interpreting the guidance any easier.
AI systems must operate safely and securely throughout their lifecycle. This includes continuous risk identification, mitigation, and management.
Where you use AI, ensure that both the AI systems and the tools for managing them are robust, with strong security measures and controls in place to protect against operational disruption.
AI should not lead to unfair outcomes or discriminate against individuals. Mortgage brokers must consider the ethical implications of AI, ensuring that it does not perpetuate biases or create disparities in their service provision.
The FCA’s Consumer Duty requires firms to act in the best interests of customers, which includes designing AI systems that are fair and transparent.
Bias in large language models (LLMs) is a particular risk, as these models can inadvertently amplify biases present in their training data. The FCA is concerned that this could lead to unfair outcomes, such as discriminatory advice practices or the exclusion of certain customer groups.
The FCA mandates that AI systems be explainable. This means that decisions made by AI, especially those affecting customers, should be understandable and justifiable.
Brokers must ensure that they can explain, to both regulators and customers, how their AI systems work and the basis of any decisions those systems make. This may not be easy, given that AI is often described as a ‘black box’.
The ‘black box’ problem refers to the challenge of understanding how AI systems, particularly complex models such as neural networks, arrive at their decisions. This opacity can pose significant risks, especially in relation to advice and lending, where not knowing the rationale behind an AI-generated decision can lead to compliance issues and undermine trust.
Clear governance structures should be in place to oversee the use of AI. Senior management must be accountable for the deployment and operation of AI systems.
The FCA’s Senior Managers and Certification Regime (SM&CR) underscores the importance of accountability in managing AI risks.
Both small and large businesses need to manage the risks associated with AI, but their approaches may differ significantly. Larger businesses typically have more resources to invest in comprehensive risk management frameworks, including dedicated AI governance teams and advanced technological solutions for monitoring and controlling AI systems. They are likely to have in-house expertise to conduct extensive risk assessments and implement robust security measures.
Conversely, small businesses may need to adopt a more focused approach, leveraging third-party solutions and seeking external expertise to manage AI risks effectively. They might prioritise specific areas such as ensuring compliance with transparency requirements and focusing on mitigating immediate operational risks. For both types of businesses, aligning AI use with the FCA’s guidelines on fairness, transparency, and accountability remains crucial.
The FCA is committed to fostering a pro-innovation environment while ensuring robust regulatory oversight. To prepare, mortgage brokers should take the following steps:
1. Assess your AI systems: Conduct a thorough review of your AI systems to ensure they comply with FCA guidelines on safety, fairness, and transparency.
2. Enhance governance: Strengthen your governance frameworks to ensure clear accountability for AI use, and consider making a named individual responsible for this within your firm.
3. Stay informed: Keep abreast of ongoing regulatory developments and participate in industry discussions on AI to stay ahead of compliance requirements.
Integrating AI into your business presents significant opportunities for innovation and efficiency. Aligning with the FCA’s guidelines will help you harness the power of AI while ensuring compliance and safeguarding customer interests. As the regulatory landscape evolves, staying informed and adaptable will be key to leveraging AI successfully in your business.
Click here to view the FCA AI Update publication
Learn more: A Broker’s guide to AI
Learn more: Choosing the right AI tools for you
Learn more: How mortgage brokers can use AI with social media