What makes the Regulatory Sandbox that Spain has just approved (on November 9, coinciding with the Spanish presidency of the Council of the European Union) so interesting?

It is a sort of practical “dress rehearsal” of compliance with the future EU AI Act. Participating companies will be able to make mistakes and learn without incurring the legal risks that will follow from non-compliance with the EU AI Act once it enters into force. In addition, they will operate in an environment in which they can influence how these obligations will be interpreted in practice by the competent authority in Spain, and potentially also by other European authorities that will analyze the results of the Regulatory Sandbox.

The successive drafts of the EU AI Act establish a very complex set of obligations and very severe penalties (exceeding those under the GDPR). Companies operating in Europe should already be assessing the potential impact of the EU AI Act and taking steps to prepare for compliance, even before its final approval, as the two-year implementation period will not be enough to adjust to the requirements of the Act.

The Regulatory Sandbox addresses the complex obligations of the EU AI Act by offering this practical “dress rehearsal”. Specifically, these are the key points of the Spanish regulation governing this controlled testing environment (Real Decreto 817/2023):

  • It seeks to study the “operationalization” of the obligations that the EU AI Act establishes for the development and use of (i) high-risk artificial intelligence systems; (ii) general-purpose artificial intelligence systems; and (iii) foundation artificial intelligence models.
  • It focuses on both “provider” and “user” entities of these three types of AI.
  • It establishes how the entities that want to join the Regulatory Sandbox will be selected and what requirements they must meet (e.g. regarding their current level of privacy compliance).
  • It stipulates a set of documentation and IT requirements/obligations (identical to some of those of the EU AI Act) for the use or development of these three types of AI, which will be applied and analyzed at a practical level within the framework of the Regulatory Sandbox. Among the most significant are: (a) the implementation of a risk management system for such AI; (b) where there is training with data, the application of quality criteria to such training, validation and testing; (c) preparation of extensive technical documentation; (d) enablement of automatic event recording systems (logs); (e) ensuring the AI is sufficiently transparent for users to be able to interpret its results and avoid discriminatory bias; (f) preparation of complete instructions for use; (g) the need for the AI system to allow for human oversight; (h) configuration of the AI system to have an “adequate level of accuracy, robustness and cybersecurity”.
  • It outlines how the dynamics of the Sandbox will work, which we summarize below:
    • Participants will be offered technical guidance and personalized advice on how to comply with these requirements and obligations.
    • An implementation plan for these requirements will be agreed upon for each AI system/use.
    • Collaborative learning meetings will be held at least monthly.
    • Participants will conduct (i) a self-assessment of compliance with the established requirements/obligations and (ii) a gender impact report.
    • The competent authority will review each participant’s self-assessment. It will then either require the participating entity to revise certain points or validate that the declaration of conformity is sufficient (a final accreditation document will be issued).
    • The Technical Guides will be updated.
    • There will be subsequent monitoring of the AI risks identified by the participants.
    • Incidents that could be considered a breach of the law must be reported at all times.

In summary, the Regulatory Sandbox offers a unique opportunity to verify compliance with complex obligations and to make mistakes and learn without exposure to the enormous potential economic sanctions under the EU AI Act. Likewise, validation of a company’s Declaration of Conformity by the Spanish artificial intelligence authority will provide differential value in the market to companies that successfully participate in the Spanish Regulatory Sandbox, and those companies will even be able to influence how these obligations are interpreted.


You can view the Spanish version of this post here.