Part of the Ghosts in the Machine b:INFORM Series

Artificial Intelligence – Legal Risk

In this article, forming part of our b:INFORM Ghosts in the Machine Series, we analyse the survey findings relating to legal risks arising from the use of Artificial Intelligence (“AI”) in financial markets and institutions. Click here for our article analysing the survey findings relating to regulatory risks arising from the use of AI. You are also welcome to visit our Ghosts in the Machine website or download a PDF copy of the survey report, and we invite you to visit our FinTech website.

Key Finding: A total of 44% of all survey respondents, and 56% of survey respondents with a legal background, felt that their businesses (not third-party businesses) did not fully understand the legal risks associated with AI.

Analysis

This is a remarkable finding, as it appears to clash with financial institutions’ (“FIs”) approach to legal risk. FIs have a low appetite for conducting any business or activity that carries high legal risk, and both numbers – 44% and 56% – are far outside acceptable risk levels.

Legal risk appetite in FIs is at an all-time low. The financial crisis, followed by the wave of civil litigation and regulatory enforcement action, has established the primacy of effective risk mitigation and certainty over the uncertainty of unknown legal risk, subject to the caveat that legal risk can only ever be managed, never eliminated. That said, FIs have to innovate and radically change themselves on multiple dimensions if they are to flourish in the new world order of ultra-competitive banking services provided by the new digital and FinTech market entrants. Consequently, there is a tension between commercially imperative evolution and the comfort of legal certainty; the development and implementation of AI needs to strike the right balance between the two. The survey suggests that the current balance is weighted too heavily in favour of adopting AI even though the associated legal risks are not yet fully understood. If so, this is a relatively high-risk approach, and the balance should be reset to one where legal risks are identified and mitigated before unexpected problems arise.

To Dos for FIs

FIs need to fully integrate their legal department experts into the development of AI so that the legal risks inherent in the emerging applications can be identified, measured and mitigated. In the past, legal teams were often involved too late in the process, and the opportunity to provide input during the design phase was missed. Late involvement leads to wasted time, effort and cost, and inhibits speed to market. The legal experts supporting the AI teams must approach the subject thinking both horizontally and vertically to ensure that all risks have been identified.

Three questions legal teams should ask themselves are:

  1. Is the legal team sufficiently closely involved in the development of AI?
  2. What is the process for identifying the possible legal risks? Is it bottom up (from potential users) as well as top down? Does it look beyond the obvious, across the full breadth of the commercial application, scanning the whole horizon? A narrow, siloed approach will be too restrictive.
  3. What is the optimum way to measure and then mitigate the identified risks so that the commercial benefits can be achieved within the risk appetite of the firm?

Outlook

The message from the survey is clear: the legal risks of AI are not yet sufficiently understood by organisations. To fix this deficit, FIs’ legal teams need to engage proactively with the AI developers and business planners in order to identify legal risks at an early stage and implement effective risk mitigation strategies. Otherwise, avoidable legal risks will be run, and liability issues will follow.

Contributor: David Brimacombe
