Part of the Ghosts in the Machine b:INFORM Series

Artificial Intelligence – Regulatory Risk

In this article, part of our b:INFORM Ghosts in the Machine Series, we analyse the survey findings relating to regulatory risks arising from the use of Artificial Intelligence ("AI") in financial markets and institutions. Click here for our article analysing the survey findings relating to legal risks arising from the use of AI. You are also welcome to visit our Ghosts in the Machine website or download a PDF copy of the survey report, and we invite you to visit our FinTech website.

More regulation needed

Key finding: A total of 60% of all survey respondents, and 72% of survey respondents occupying a legal/compliance function, expressed the view that more, and perhaps much more, regulation would be required in response to AI.

AI is at the cutting edge of the evolution of financial services. It has numerous potential applications in both consumer/retail and corporate/wholesale markets, and it will be intrinsically linked to other cutting-edge innovations such as robotics ("bots"), which will be harnessed in myriad ways and areas, for example in providing wealth management and insurance advice. Existing regulatory frameworks were not designed with AI in mind and, while at a high level a principles-based approach would capture AI, the coverage is not ideal. Hence the need for a global approach to new regulation, which should be adopted after international consultation and be consistent across markets.

Regulators not on top of AI

Key finding: A separate finding was that 75% of all survey respondents and 64% of survey respondents occupying a legal/compliance function felt that regulators are not keeping pace with advances in technology.

This finding is not surprising. The World Economic Forum report of 19 April 2016, entitled "The Role of Financial Services in Society: Understanding the impact of technology-enabled innovation on financial stability", suggests that steps need to be taken to ensure national supervisors are well equipped to monitor and mitigate the risks arising from technology-enabled innovation. Indeed, in private conversations many national regulators admit that they fear falling behind. Regulators do not have the same level of resources as the private sector, which is investing billions of dollars each year and recruiting the top scientific brains to develop AI. Consequently, there is a huge imbalance between the regulated and the regulators. It is therefore incumbent on financial institutions ("FIs") as a sector to keep regulatory bodies proactively informed of developments and applications. If the FI sector fails to do this, it leaves itself vulnerable to unnecessary rule changes that prohibit or restrict the application of AI. This should not be confused with the parallel debate on whether FIs should share highly confidential proprietary information on the AI technology they themselves use, as some regulators and politicians now propose.

Regulatory focus

The survey also explored how regulators should harness the application of AI for the purposes of reducing regulatory risks. Respondents were asked to name an area which they would like regulators to focus on in terms of adoption of AI. 

Key finding: Of all survey respondents, 49% thought that regulators should prioritise the adoption of AI in the area of Market Abuse in order to reduce regulatory risks, whereas 38% chose the area of AML/KYC as a regulatory priority. Interestingly, survey respondents occupying a legal/compliance function applied the reverse prioritisation, with 40% suggesting regulators should focus their AI efforts on AML/KYC and 28% proposing a focus on Market Abuse.

This focus on AML/KYC by the legal and compliance community may well result from the prominent attention AML and KYC have recently received from Transparency International and the FATF, especially in conjunction with counter-terrorist financing and the global initiatives to curb corruption. A separate point is that survey respondents anticipate that AI will strengthen the regulators' ability to actively regulate areas of the market which have historically proven challenging but remain in the spotlight.


The FI sector faces the challenge of ensuring both that regulators are kept up to speed in the fast-changing world of AI technology and that the global regulatory community provides a coordinated and consistent response. It is in the interest of the FI sector as a whole that a broad public-private initiative emerges, under which FIs provide sufficient data and assistance to regulators to empower them to properly understand the issues around AI at both national and international level. Otherwise, the sector may well face inconsistent rules between jurisdictions and unnecessary curbs on AI applications. FIs should also anticipate that regulators themselves will adopt AI in suitable areas such as AML and Market Abuse.

Contributor: David Brimacombe