The use of Artificial Intelligence (AI) can inadvertently give rise to issues relating to data protection compliance and equality law. Used properly, however, it also offers a unique opportunity to combat implicit systematic discrimination. The new EU AI Act supports such an optimistic approach to AI.
Discrimination through non-automated processes
In the public discourse on AI and the associated risks of discrimination, it is often overlooked that human decisions can themselves be unconsciously based on non-objective criteria.
As a result, forms of implicit systematic discrimination can occur in the course of workforce decision making. For example, HR decisions made “on instinct” may be based on ill-considered preferences. Research has shown that individuals tend to attribute more skills to people they find attractive. This and other unconscious biases can trigger discriminatory results. Furthermore, irrelevant criteria such as a person’s own current state of wellbeing can also impact decision-making.
AI as an opportunity in the fight against discrimination
There is a risk that AI-powered automation of decision-making processes could perpetuate discriminatory effects if the AI systems are trained on data drawn from previous biased human decisions. This risk can, however, be mitigated through appropriate quality control measures: assessing the training data for discrimination and bias, and taking remedial action to address this where required.
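As a minimal illustration of what such a quality control check might look like in practice, the sketch below computes group-level selection rates in historical decision data and flags disparities. The records, field names, and the 80% (“four-fifths”) threshold are illustrative assumptions drawn from common fairness-auditing practice, not a standard prescribed by the AI Act.

```python
# Hypothetical sketch: screening historical HR decisions for group-level
# disparity before using them as AI training data.

from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Return the share of positive outcomes per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest selection rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Illustrative (fabricated) training records from past hiring decisions.
records = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

rates = selection_rates(records, "group", "hired")
print(rates)                      # {'A': 0.75, 'B': 0.25}
print(disparity_ratio(rates))     # 0.333... -> below 0.8, flag for remediation
```

A check like this only detects disparity; deciding whether the disparity reflects unlawful bias, and what remedial action is appropriate, remains a legal and organisational judgment.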
Implementing these measures, however, will often require collecting sensitive data about the data subjects whose personal data forms part of the training data, for example, data about an individual’s religious affiliation or sexual orientation. The collection of any such personal data will need to comply with the EU General Data Protection Regulation (GDPR), which imposes more stringent obligations for processing sensitive personal data.
AI Act introduces a paradigm shift
With the AI Act, the EU legislator has recognized for the first time that this challenge needs to be addressed. It expressly permits the use of sensitive data insofar as this is absolutely necessary to counteract discriminatory bias in high-risk AI systems. These high-risk AI systems include, in particular, AI systems that support work-related decisions, e.g., by evaluating employees.
The AI Act permits the processing of sensitive data under strict additional conditions. These conditions prioritize protecting data subjects’ interests under data protection law.
Specifically:
- The use of sensitive data must be necessary for the detection and correction of biases, and is permitted only where other types of data, including synthetic or anonymized data, are not sufficient for this purpose.
- Furthermore, sensitive data must be protected by the highest security measures. This includes strict access controls and documentation of all access.
- In addition, the sensitive data must be pseudonymized so that the data subjects cannot be immediately identified.
- Finally, sensitive data may only be transferred to authorised persons who are subject to appropriate confidentiality obligations, and it must be deleted as soon as it is no longer necessary for the detection and correction of biases.
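The safeguards listed above can be sketched in code. The following is a minimal, hypothetical illustration of keyed pseudonymization, logged access, and prompt deletion; the function names, storage model, and key handling are illustrative assumptions, not requirements quoted from the AI Act.

```python
# Hypothetical sketch of technical safeguards for sensitive training data:
# pseudonymization, access logging, and deletion once no longer needed.

import hashlib
import hmac
from datetime import datetime, timezone

# Illustrative key; in practice it would be rotated and stored separately
# from the data so pseudonyms cannot be reversed by data holders alone.
SECRET_KEY = b"rotate-and-store-separately"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

access_log = []  # every access to sensitive data is documented

def read_record(store, pseudonym, user):
    """Fetch a record while logging who accessed it and when."""
    access_log.append((datetime.now(timezone.utc).isoformat(), user, pseudonym))
    return store.get(pseudonym)

# Sensitive attribute stored only under a pseudonym, never a direct identifier.
store = {pseudonymize("jane.doe@example.com"): {"religion": "X"}}

record = read_record(store, pseudonymize("jane.doe@example.com"), "bias-auditor-1")
print(record)  # {'religion': 'X'}

# Delete as soon as the data is no longer necessary for bias correction.
store.clear()
```

Note that keyed pseudonymization of this kind still yields personal data under the GDPR, since identification remains possible for whoever holds the key; it reduces risk but does not anonymize.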
All of this must be documented appropriately. This paradigm shift is an expression of a fundamental optimism that our social reality can be improved in a sustainable manner through properly regulated AI.
This is the first post of a three-part blog series. Click to view our second post GDPR compliance and inclusion: striking the right balance, and look out for our final post on The protection of gender identity under the GDPR next week.