The concept of AI is generally attributed to computer scientist John McCarthy, back in the 1950s. It is extremely broad and complex, yet it may be summarised (at the risk of oversimplification) as machine intelligence designed to perform a defined set of actions and to learn from experience.

AI is nowadays part of our lives, often without us noticing. According to Ray Kurzweil's fascinating theory in "The Singularity is Near", the time when advanced AI will truly encroach on the human world is fast approaching. He predicts that one day AI will not just equal but overtake and supplant human intelligence.

The financial sector is already highly impacted by AI, as explored in detail in our recent Ghosts in the Machine survey. Survey respondents expressed strong concerns regarding legal and regulatory risks arising out of the use of AI. No doubt, the use of AI poses significant legal and regulatory risks from a data privacy and data security perspective, and financial institutions ("FIs") will need to ensure their use of AI complies with data privacy and security legislation and regulation.

A glance at Europe

In the following, using the incoming European General Data Protection Regulation ("GDPR") as an example, we highlight three data privacy/security requirements that FIs should carefully consider before embracing AI-based solutions. Needless to say, this list is not comprehensive.

Firstly, data subjects have a right not to be subject to decisions based solely on automated processing to the extent those decisions produce legal or similarly significant effects concerning them. Exemptions apply, but this prohibition will likely require consideration in many instances, such as the use of AI for credit scoring purposes.

Another challenge will be honoring individuals' privacy rights in light of the volume of information that may be collected through AI. Traditionally, FIs (together with large-scale retailers) seek to collect as much personal information as possible and store it indefinitely. This tendency will only increase with the growing use of AI-based solutions. Handling requests from individuals to access, delete or rectify data, and comprehensively informing them about data collection practices, will become more difficult. FIs will need to implement solutions that enable them to handle such requests in a privacy-compliant way and to adequately inform individuals.

FIs would also be wise to carefully consider the stringent GDPR rules on data protection by design/by default. These require data controllers to (1) embed data protection into the design specifications of technology from the outset, as opposed to adding it on as an afterthought (data protection by design), and (2), by default, process only the personal data necessary for a specific and duly identified purpose (data protection by default).

Regulation may help (?) 

The reality is that the GDPR is a sort of storm in the privacy world and a tsunami in the technology-driven ecosystem – for example in the IoT space, which in many respects is itself an expression of AI. A solid majority of our survey respondents expressed the view that (much) more regulation would be required to address AI. Yet, from a data protection perspective, the GDPR has just been passed and will become enforceable in about two years. Adding new pieces of legislation on top of existing laws that are not yet fully in force could overwhelm the privacy ecosystem. Not to mention that, even under the GDPR, country-specific legal peculiarities will remain and further complicate compliance.
FIs face the difficult challenge of easing the tension between AI and data protection, and of finding a way to reconcile their appetite for AI with the GDPR and, more generally, data privacy laws worldwide. Given the significant fines that can be imposed under the GDPR, compliance is a 'must' and not just a matter of risk management. The principle of data protection by design, conceived precisely to address the risks triggered by increasing use of technology, may be a good starting point, meaning that FIs should begin their AI journey with a privacy-oriented mindset.

 

Author

Francesca Gaudino is the Head of Baker McKenzie's Information Technology & Communications Group in Milan. She focuses on data protection and security, advising particularly on legal issues that arise from the use of cutting-edge technology.