The European Union’s draft AI Act is an ‘early-mover’ in the arms race towards a global blueprint for AI regulation. In December 2022, the Council of the European Union approved a compromise version of the AI Act, and next month, the European Parliament is scheduled to vote on the draft text. But despite its initial promise, the AI Act increasingly resembles the circumstances of its conception – a complex, one-time political compromise between hundreds of MEPs with wildly differing views. What’s more, by adopting the infrastructure of the New Legislative Framework (NLF), the AI Act leaves gaps in the protection of individuals. This means that as your organisation develops an ethical AI framework, the AI Act will be only one piece of the jigsaw in establishing AI-specific risk mitigation measures.

What does this mean if your organisation aspires to be an ethical adopter of AI systems?

The AI Act marks a pivotal point: a key opportunity to build the foundations for compliance going forward. But the AI Act has blind spots, and demonstrating compliance with the AI Act alone will not be enough to ensure responsible deployment of AI systems. To truly mitigate risk, we recommend that organisations:

  1. Acknowledge that the AI Act is ultimately positioned as EU product regulation. Leverage your organisation’s existing product safety infrastructure and resources to develop your compliance programme.
  2. Consider supplementing your organisation’s AI Act compliance programme with a broader AI ethics framework, built around your organisation’s core principles, its ethical concerns, data protection principles and the safeguarding of fundamental rights.
  3. Carry out an initial assessment to identify where AI systems are being deployed or developed within your organisation. Use a “wide lens” to assess the level of risk in these systems – risk categorisation should not be based solely on the risk categories described in the AI Act.
  4. Carry out an in-depth analysis of your AI supply chains. Your organisation’s regulatory obligations hinge on applying concepts that do not easily fit into the reality of AI supply chains, so this will likely be an essential (but complex) analysis.
  5. Consider the impact on individuals, including in connection with transparency. This is a blind spot in the AI Act.

We explore four key shortcomings of the AI Act below, and the actions we’ve been discussing with our clients in light of these. If you’d like to discuss this further, please get in touch.

  1. Understanding the AI Act as product regulation (rather than the sequel to the GDPR).

It is all too easy to assume that the AI Act will be similar to the GDPR – both are concerned with big data, and the impact of its use on the fundamental rights of EU citizens. But in drafting the AI Act, EU legislators have shunned the GDPR’s legislative structure for something entirely different – the AI Act adopts the New Legislative Framework (NLF), the EU’s product safety regulation model. This means it follows the legislative structure and core concepts of the EU Medical Devices Regulation 2017/745 (EU MDR), the Toy Safety Directive 2009/48 and other product safety regimes.

What does this mean for your organisation? Unlike the GDPR, the AI Act is not concerned with data subject rights or transparency for individuals. Instead, it centres on the lifecycle of the product itself (here, an AI system). The AI Act tracks this lifecycle, imposing obligations at every stage: training, testing and validation; risk management systems; conformity assessments; quality management systems; and post-market monitoring. In building a compliance programme for “high-risk” systems, it will be essential to leverage the expertise of regulatory specialists who are familiar with these concepts and with documents such as the Blue Guide. Where third party conformity assessment is required, you’ll likely need to involve specialists familiar with notified bodies.

  2. The AI Act categorises and restricts AI use cases according to level of risk (or so it says).

The AI Act categorises AI systems as: (a) prohibited, (b) high-risk or (c) limited risk, but the principles on which these value-based judgments are made are not clear. It is at least questionable whether these categories truly correspond to level of risk. Instead, the categories seem to reflect complex political compromises, crystallised at a single point in time.

As a result, the AI Act effectively relies on the private and public sectors to police themselves in a number of contentious consumer-facing use cases for AI that are not categorised as “high-risk” (at least in the latest draft). By failing to adopt a transparent, principle-based approach to categorising risk, the EU is handing the private sector significant power and influence over these AI systems.

The AI Act’s categorisation system reflects the following tiers:

  • Limited risk: There are some minor transparency provisions for what the AI Act calls “limited risk” systems (Article 52). These are systems that interact with individuals (such as chatbots); biometric categorisation systems; deep fakes; and emotion recognition systems. It is not clear why the AI Act sees these as lower risk, and in any case, these systems have complex ethical implications. There are unanswered questions as to whether deep fakes and biometric categorisation systems are harmful and potentially discriminatory, and emotion recognition systems rest on a dubious scientific basis. If deployed, these systems have real-world implications for trustworthiness and public perception. Deeming these systems to be “limited risk” seems premature.
  • High-risk: The substance of the AI Act focuses on regulating “high-risk” systems, which are subject to a detailed product safety regime. However, this does not involve a genuine assessment of risk by organisations. Instead, Article 6 (read with Annexes II and III) sets out a defined list of high-risk AI systems, including (to take a few examples):
    • remote biometric identification systems.
    • AI systems that determine access to education.
    • AI systems used for HR purposes, such as recruitment, promotion, termination, allocation of tasks, and monitoring or evaluating employees.
    • AI systems used in law enforcement; migration, asylum and border control; and the administration of justice.
    • systems required to undergo third party conformity assessment under one of the existing regimes listed in Annex II (Article 6(1)). These regimes include the EU MDR and the In Vitro Diagnostic Medical Devices Regulation 2017/746 (IVDR). Disappointingly, this blanket provision means that in practice, almost all AI as a Medical Device (AIaMD) is considered “high-risk” for the purposes of the AI Act, even if the device is not in the highest risk classes for the purposes of the EU MDR. This is because, as software, AIaMD is predominantly Class IIa or higher under the EU MDR (Rule 11, Annex VIII, Chapter III), meaning it requires third party conformity assessment. However, there is a vast spectrum of AIaMD products. It seems short-sighted to deem these all to be “high-risk” under the AI Act, particularly given the enhanced (and sometimes duplicative) requirements imposed by the AI Act and the EU MDR operating in parallel.
  • Prohibited: Article 5 lists prohibited use cases, which (for most companies) will be of largely rhetorical effect. These prohibited use cases include:
    • social scoring.
    • subliminally manipulating the behaviour of individuals in a manner that causes harm.
    • exploiting the vulnerabilities of a specific group to manipulate their behaviour in a manner that causes harm.
    • ‘real-time’ remote biometric identification (such as facial recognition) in publicly accessible spaces for law enforcement purposes (although there are exceptions to this).

Whilst these use cases are clearly undesirable, some merit further consideration – for example, why is only “subliminal” manipulation captured?

What does this mean for your organisation? As a first step in building an AI compliance programme, organisations should take stock of the AI systems deployed and/or developed within their organisation (across functions), and carry out a risk assessment of each system. But in assessing risk, organisations should not rely solely on the risk classification suggested in the AI Act. The draft AI Act does not address AI systems that we come into contact with on a daily basis, including search engines and profiling for targeted advertising. Further, you will need to use a wider lens to assess risk, and consider (for example) the implications for individuals’ fundamental rights, including dignity, freedom of thought, equality and privacy; the risk of structural and individual discrimination; data privacy; health and safety; and public perception / the views and wishes of individuals.
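For organisations whose technical teams are involved in this stocktake, it can help to capture each AI system in a structured inventory record alongside the wider-lens factors above. The sketch below is purely illustrative: the schema, field names and escalation logic are our own assumptions rather than anything prescribed by the AI Act, and any real assessment will need legal and regulatory input.

```python
from dataclasses import dataclass, field
from enum import Enum


class AIActCategory(Enum):
    """Risk categories used by the draft AI Act (Articles 5, 6 and 52)."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk"
    NOT_CATEGORISED = "not expressly categorised"


# Illustrative "wider lens" factors; the wording is ours, not the AI Act's.
WIDER_LENS_FACTORS = [
    "dignity, freedom of thought, equality and privacy",
    "structural or individual discrimination",
    "data privacy",
    "health and safety",
    "public perception / views and wishes of individuals",
]


@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    business_function: str            # e.g. HR, marketing, clinical support
    developed_in_house: bool
    ai_act_category: AIActCategory
    # Free-text notes against each wider-lens factor, completed by the
    # cross-functional team carrying out the assessment.
    wider_lens_notes: dict = field(default_factory=dict)

    def needs_escalation(self) -> bool:
        """Flag systems that warrant legal/ethics review, even where the
        draft AI Act would not treat them as high-risk."""
        return (
            self.ai_act_category in (AIActCategory.PROHIBITED, AIActCategory.HIGH_RISK)
            or any(self.wider_lens_notes.get(f) for f in WIDER_LENS_FACTORS)
        )


# Example: a CV-screening tool used in recruitment (listed as high-risk in the draft).
cv_screening = AISystemRecord(
    name="CV screening model",
    business_function="HR / recruitment",
    developed_in_house=False,
    ai_act_category=AIActCategory.HIGH_RISK,
    wider_lens_notes={"structural or individual discrimination": "possible bias in training data"},
)
print(cv_screening.needs_escalation())  # True
```

The point of a record like this is not the code itself, but ensuring that the “wider lens” factors sit alongside the AI Act categorisation in the same inventory, so systems the draft Act ignores still surface for review.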

  3. Consumers have no (or an extremely limited) role under the AI Act.

The AI Act has a key shortcoming – consumers have no rights, no recourse and effectively no role under the AI Act. This may seem bizarre for a piece of legislation that purports to safeguard the fundamental rights of EU citizens, but it is a consequence of adopting the NLF. The NLF’s compliance obligations focus on the actors that place AI systems on the EU market (or put these into service), rather than those that may be affected in the real world by the output of AI systems (i.e. the equivalent of “data subjects” under the GDPR).

There are minor transparency obligations to notify individuals that they are interacting with a “limited risk” AI system. However, this requirement more closely resembles a product labelling requirement, and is not equivalent to the kind of GDPR “fair processing notice” that data privacy specialists will be familiar with. Instead, the “transparency” obligations under the AI Act primarily focus on the provision of Instructions for Use for those organisations that deploy AI systems provided by a third party (Article 13).

What does this mean for your organisation? As an ethical developer or deployer of AI systems, you may want to consider the steps your organisation can take to solicit the views of individuals in developing and deploying AI systems, and make individuals more central to the development and deployment of AI systems. How will you ensure transparency for individuals? Should individuals have an opportunity to provide their views before the AI system is placed on the market? What kind of information will you provide to users, bearing in mind what you may already be providing under the GDPR?

  4. Squaring the circle: mapping your AI supply chain.

The lion’s share of obligations under the AI Act fall on “providers” of AI systems, defined as the person or body that develops an AI system (or has that system developed) and places that system on the market or puts it into service under its own name or trade mark (Article 3).

The AI Act also places obligations on other operators, such as:

  • The “user” under whose authority the system is used (typically the organisation deploying the system).
  • The “authorised representative”, who must be appointed by any provider outside the EU to perform certain tasks pursuant to a written mandate from the provider.
  • The “importer” established in the EU that places on the EU market an AI system from outside the EU.
  • The “distributor”, i.e. any person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market.

These definitions are similar to those used in other legislation based on the NLF, and have traditionally worked well for physical products. But these definitions often don’t fit neatly into the software context, as those who are familiar with Software as a Medical Device under the EU MDR may know. The supply chain for AI systems tends to be even more complex than for Software as a Medical Device. This means that correctly assessing your AI supply chain will be essential in identifying the scope of your compliance obligations.

What does this mean for your organisation? It will be essential to map your supply chain in order to correctly identify your obligations under the AI Act. This will not always be a straightforward exercise, given that AI supply chains often involve a complex web of procurement, use of open source software, further development, outsourcing, etc. For example, your organisation may procure an AI system from a third party that is not high-risk, but the AI system may eventually become high-risk through further modifications by your organisation’s developers, or the developers of a third party. This may mean that your organisation assumes the role of provider of a high-risk system (Article 23a). This would trigger an avalanche of compliance obligations under the AI Act that your organisation may not be capable of assuming in practice, including around conformity assessment, record-keeping, post-market monitoring, etc.
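Where technical teams are involved in the mapping exercise, it can help to record, for each AI system, which organisation holds which operator role, and to flag changes (such as substantial modifications) that might shift your organisation into the provider role. The sketch below is a simplified illustration under our own assumptions: the roles mirror the defined terms above, but the trigger for assuming provider obligations is a deliberately crude placeholder, and the question will always turn on legal analysis of the facts.

```python
from dataclasses import dataclass
from enum import Enum


class OperatorRole(Enum):
    """Operator roles defined in Article 3 of the draft AI Act."""
    PROVIDER = "provider"
    USER = "user"
    AUTHORISED_REPRESENTATIVE = "authorised representative"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"


@dataclass
class SupplyChainEntry:
    """A simplified record of one actor's role for a given AI system."""
    system_name: str
    organisation: str
    role: OperatorRole
    # Illustrative flags only: whether a change amounts to a substantial
    # modification is a legal question, not a boolean in a spreadsheet.
    substantially_modified_by_us: bool = False
    marketed_under_our_name: bool = False

    def may_assume_provider_role(self) -> bool:
        """Crude flag for when a deployer risks taking on provider
        obligations for a high-risk system (cf. Article 23a)."""
        return self.role is not OperatorRole.PROVIDER and (
            self.substantially_modified_by_us or self.marketed_under_our_name
        )


# Example: a system procured from a vendor and then retrained in-house.
entry = SupplyChainEntry(
    system_name="Demand forecasting model",
    organisation="Acme Ltd (deployer)",
    role=OperatorRole.USER,
    substantially_modified_by_us=True,
)
print(entry.may_assume_provider_role())  # True -> escalate for legal review
```

However rudimentary, keeping this kind of role map current makes it much easier to spot the moment at which in-house development work starts to change your organisation’s position in the supply chain.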

Author

Jaspreet is a Senior Associate who advises clients on complex issues at the intersection of healthcare, data and technology. Her practice has a particular focus on accessing and using patient data, innovative collaborations with hospitals, and the use and regulation of AI in the healthcare space.