A 2023 survey by the European Central Bank (ECB) revealed that 60% of the 105 major European banks surveyed are already utilising AI, with additional applications under development. The adoption of AI is similarly gaining momentum in the insurance sector. In the United States, a 2023 survey reported that 88% of car insurance companies and 58% of life insurance companies are either using or planning to implement AI technologies. The European Insurance and Occupational Pensions Authority (EIOPA) launched a review in 2018, which revealed that 31% of the European insurance firms included in the review were using AI, while a further 24% were using it at a “proof of concept” stage. As for the capital markets, AI is prominently employed through machine learning models that drive algorithmic trading.
These developments prompted the Dutch Central Bank (De Nederlandsche Bank, DNB) and the Dutch Authority for the Financial Markets (AFM) to publish a report in 2024 on the impact of AI on the financial sector and its supervision. DNB noted that Dutch financial institutions have been using AI for some time and are now experimenting with more advanced models, indicating that the adoption of AI is likely to expand further in the coming years. DNB highlighted that Dutch financial institutions apply AI in areas such as fraud prevention and detection; combating money laundering, terrorism financing and cybercrime; credit assessments; and identity verification. Additionally, AI supports employees in working more efficiently. While many institutions remain cautious about adopting generative AI for the time being, they recognise its potential and are gradually integrating it into supporting processes.
Are financial institutions using AI when performing regulated activities?
The activities of financial institutions, such as insurers, are restricted when it comes to utilising artificial intelligence (AI) systems. They must comply with the existing legislative framework, including the Solvency II Directive, the Insurance Distribution Directive (IDD), the General Data Protection Regulation (GDPR) and the ePrivacy Directive (ePD).
Financial institutions use AI across their business operations in various processes and to help improve the products that they offer to consumers. AI has not yet found widespread use as a tool to facilitate activities that directly affect customers, such as credit scoring and underwriting. Some examples of current uses of AI systems include:
- Use of AI for fraud detection and risk assessments;
- Implementation of AI to monitor transactions in real-time; and
- Use of AI in car insurance to monitor customers’ driving behaviour and offer monthly discounts based on their risk scores (a simplified sketch of how such a feature might work follows below this list).
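By way of illustration only, the following sketch shows how a telematics-based discount feature of this kind might work in principle. The data fields, scoring weights and discount tiers are hypothetical assumptions made for this example and are not drawn from any actual insurer’s model.

```python
from dataclasses import dataclass


@dataclass
class MonthlyDrivingData:
    """Aggregated telematics readings for one customer over one month (hypothetical fields)."""
    km_driven: float
    harsh_braking_events: int
    speeding_minutes: float
    night_driving_share: float  # fraction of driving at night, 0.0 - 1.0


def risk_score(d: MonthlyDrivingData) -> float:
    """Toy risk score in [0, 1]; higher means riskier. Weights are illustrative only."""
    per_100km = lambda x: x / max(d.km_driven / 100.0, 1.0)
    score = (
        0.4 * min(per_100km(d.harsh_braking_events) / 5.0, 1.0)
        + 0.4 * min(per_100km(d.speeding_minutes) / 30.0, 1.0)
        + 0.2 * d.night_driving_share
    )
    return round(min(score, 1.0), 2)


def monthly_discount(score: float) -> float:
    """Map the risk score to a premium discount (hypothetical tiers)."""
    if score < 0.2:
        return 0.15  # 15% discount for very safe driving
    if score < 0.5:
        return 0.05  # 5% discount for average driving
    return 0.0       # no discount for risky driving


if __name__ == "__main__":
    month = MonthlyDrivingData(km_driven=800, harsh_braking_events=3,
                               speeding_minutes=12, night_driving_share=0.1)
    s = risk_score(month)
    print(f"risk score: {s}, discount: {monthly_discount(s):.0%}")
```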
Although the use of AI systems is allowed, transparency is a key element of their proper use. It is important that financial institutions properly explain what they are offering, in order to build trust with consumers, who can then make their own informed decisions.
When it comes to AI, it is also important to go beyond merely complying with the legislation. Ethical implications and the ‘public good’ must be taken into consideration when using these systems and technologies. Financial institutions should ensure that the quality of their products and services remains uncompromised and that the use of AI does not lead to reputational harm, such as being perceived as unethical. They should only implement ethical and trustworthy AI systems within their business frameworks. Furthermore, these institutions should exercise caution to avoid becoming overly reliant on AI systems and must develop contingency plans to address potential incidents or crises.
What are the key considerations when using AI to perform client due diligence (CDD) or transaction monitoring?
The use of AI can involve risks, for example when it cannot be explained how the AI system reached a certain conclusion, or when it produces or presents unreliable data or advice. It is important to be aware of these risks, and controls should be embedded within the business framework to mitigate them as much as possible. Some examples include:
- Client due diligence (CDD) – AI can be used as a timesaver when performing client due diligence, for example by organising documents and providing quick and effective search functions. However, it is still necessary for a human to go through the AI’s output to interpret it and verify its accuracy.
- Transaction monitoring – Where many transactions of varying complexity are involved, it is good practice for firms to use AI and other models, combining various alert generation methods, to monitor these transactions and help prevent money laundering and terrorist financing. A task that would otherwise be a significant drain on time and resources can instead be performed by such models, which track and flag unusual patterns and complex transactions for investigation (a minimal sketch of one such alert generation method follows below this list).
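Purely as an illustration of one simple alert generation method of the kind referred to above, the sketch below flags transactions whose amounts deviate strongly from a customer’s historical pattern. The z-score rule and threshold are assumptions made for this example; real monitoring systems combine many rules, models and data sources and route alerts to human analysts.

```python
import statistics
from typing import List


def flag_unusual_transactions(history: List[float], new_txns: List[float],
                              z_threshold: float = 3.0) -> List[float]:
    """Flag transactions whose amount deviates strongly from the customer's history.

    A simple z-score rule: amounts more than `z_threshold` standard deviations
    above the historical mean generate an alert for human review. The threshold
    is illustrative; production systems tune it and combine many other signals
    (counterparty, country, velocity, network features, model scores).
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return [amount for amount in new_txns if (amount - mean) / stdev > z_threshold]


if __name__ == "__main__":
    past = [120.0, 95.0, 150.0, 80.0, 110.0, 130.0, 100.0]
    incoming = [105.0, 9_500.0, 140.0]
    for alert in flag_unusual_transactions(past, incoming):
        print(f"ALERT: transaction of {alert:.2f} flagged for manual review")
```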
Where financial institutions use AI solutions from third-party service providers, additional considerations may also be relevant, such as the Digital Operational Resilience Act (DORA), which applies in full from 17 January 2025.
What is the impact of the AI Act on financial institutions?
The new EU AI Act, the first provisions of which will apply from 2 February 2025, will have significant implications for the way AI is used by financial institutions. Please refer to our earlier news update on the EU AI Act for further background.
The AI Act entered into force on 1 August 2024 and will be applicable in stages.
- General provisions and the bans on prohibited AI practices will apply as of 2 February 2025;
- Rules on general-purpose AI models, including governance provisions, will apply as of 2 August 2025;
- Most remaining obligations, including those for high-risk AI systems listed in Annex III, will apply as of 2 August 2026; and
- Obligations for high-risk AI systems that are safety components of products covered by existing EU product legislation will apply as of 2 August 2027.
AI systems that are considered high-risk will be subject to more stringent requirements. The high-risk qualification could be particularly relevant in the financial sector, for instance where AI systems are used for customer identification, human resources, lending, or the risk assessment and pricing of life and health insurance. It would, for example, apply to an AI system designed to assess the creditworthiness of customers. If an AI system qualifies as high-risk, several requirements must be met, including putting in place a quality management system, keeping certain documentation and logs, performing a conformity assessment, registering the system, and complying with accessibility requirements. A simplified illustration of the record-keeping element appears below.
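By way of illustration of the record-keeping element only, the sketch below logs each automated creditworthiness decision together with its inputs and model version so that decisions can later be traced and reviewed. The field names, scoring rule and thresholds are hypothetical and do not reflect any format prescribed by the AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for automated creditworthiness decisions.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("credit_decision_audit")

MODEL_VERSION = "credit-score-demo-0.1"  # illustrative model identifier


def score_creditworthiness(applicant: dict) -> dict:
    """Toy scoring rule standing in for a real model; thresholds are illustrative."""
    ratio = applicant["monthly_debt"] / max(applicant["monthly_income"], 1)
    decision = "approve" if ratio < 0.35 else "refer_to_human_review"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "inputs": applicant,
        "debt_to_income_ratio": round(ratio, 3),
        "decision": decision,
    }
    # Persisting each automated decision supports traceability and human oversight.
    audit_log.info(json.dumps(record))
    return record


if __name__ == "__main__":
    score_creditworthiness({"applicant_id": "A-001", "monthly_income": 4200, "monthly_debt": 900})
```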
Financial institutions may qualify as deployers of high-risk AI systems. For example, a fund manager that procures an AI system from an external party to evaluate employee performance and deploys it within its organisation would fall within this scope. A lender that develops an AI system to assess the creditworthiness of customers and uses it internally may instead qualify as a provider. A life insurer that acquires an AI system from an external developer to assess individual risks and uses it under its own (brand) name will, whilst being a deployer, also be considered a provider. Depending on the qualification, financial institutions are subject to rules for deployers, such as operational monitoring and human oversight, or for providers, such as conformity assessments and registration obligations.
There is also a residual category of lower-risk AI systems, for which numerous potential applications exist in the financial sector. Such systems can support financial institutions by analysing large-scale data from a wide range of sources. One example is algorithmic trading, where an AI system analyses market trends or other signals and feeds the results into the trading algorithm (see the illustrative sketch below). Voluntary codes of conduct will be developed for these systems. However, the extent to which such codes truly remain voluntary will depend on the approach taken by regulators: the more they are tailored to the financial sector, the more they will set expectations that financial institutions will, in practice, be expected to meet.
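For illustration only, the sketch below shows this pattern in its simplest form: a model-derived signal (here a naive moving-average trend estimate standing in for an ML model’s output) is fed into a trading rule. All figures, thresholds and function names are hypothetical assumptions made for this example.

```python
from typing import List


def trend_signal(prices: List[float], short: int = 5, long: int = 20) -> float:
    """Stand-in for an AI model's output: a normalised short- vs long-term trend estimate."""
    if len(prices) < long:
        return 0.0
    short_ma = sum(prices[-short:]) / short
    long_ma = sum(prices[-long:]) / long
    return (short_ma - long_ma) / long_ma  # positive = upward trend


def trading_decision(signal: float, threshold: float = 0.01) -> str:
    """Trading algorithm that consumes the model's signal (threshold is illustrative)."""
    if signal > threshold:
        return "BUY"
    if signal < -threshold:
        return "SELL"
    return "HOLD"


if __name__ == "__main__":
    prices = [100 + 0.3 * i for i in range(30)]  # hypothetical upward-drifting price series
    s = trend_signal(prices)
    print(f"signal: {s:.4f} -> decision: {trading_decision(s)}")
```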
In conclusion, financial institutions must adapt their existing AI systems to ensure compliance with the new AI Act. Where AI is applied in financial services, the underlying principle of the AI Act is that existing financial supervisors, such as the AFM and DNB in the Netherlands, will also oversee compliance with it.