On April 21, the European Commission (“EC”) presented the long-awaited Proposal for a Regulation on a European Approach for Artificial Intelligence, proposing a single set of rules to regulate Artificial Intelligence (“AI”) in the European Union (the “Proposal for an AI Regulation” or “Proposal”).

Through the new rules, the EC aims to achieve a high level of protection for the fundamental rights of European Union (“EU”) citizens when AI is used, while fostering the development of the technology. If approved, the Proposal for an AI Regulation will establish a uniform legal framework that, on the one hand, contains strict rules for high-risk AI and heavy penalties for non-compliance but, on the other, offers opportunities for easier AI development through cooperation with supervisory authorities, the creation of regulatory sandboxes and measures to reduce the regulatory burden on small-scale providers and users/start-ups, amongst others (Articles 53 to 55 of the Proposal).

The provisions of the Proposal closely follow the EU’s strategy for regulating technologies. Some of the newly proposed obligations are similar to those already provided for in other instruments, such as the GDPR (which, for example, already regulates AI when it is used to process personal data). In fact, the solutions and interpretations put forward may broaden the interpretation of provisions in existing legislation.

1. Scope of application
 
The Proposal for an AI Regulation was written with a broad scope of application, establishing obligations for most players in the AI supply chain, including:
  • Providers of AI systems;
  • Entities using AI systems (known as “Users” in the Proposal);
  • Importers, distributors, product manufacturers and authorised representatives.

Geographically, the Proposal will apply even when an entity is not established in the EU, as long as the AI system is placed on the market or put into service in the EU, or the output produced by the system is used in the EU.

2. Forbidden AI uses

Under the Proposal, a number of uses of AI would be strictly forbidden in the EU, including:
a) AI systems that deploy subliminal techniques beyond a person’s consciousness to distort that person’s behaviour in a harmful manner;
b) AI systems targeting vulnerabilities of a specific group in a harmful manner;
c) social scoring;
d) use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless one of the exceptions provided for in the Regulation applies (Article 5 of the Proposal).

3. High-risk AI

In an attempt to limit the chilling effect of the new rules, most of the legal requirements under the Proposal for an AI Regulation apply only to high-risk AI (Articles 6 to 51 of the Proposal).

The Proposal, in its Annex III, contains a list of uses that should be considered high-risk (to be read alongside the cases in Article 6(1), points a) and b), under which, for example, certain medical devices and uses of AI in motor vehicles may also be considered high-risk). More specifically, high-risk AI systems are those:

  • Intended to be used for ‘real-time’ and ‘post’ remote biometric identification of natural persons;
  • Intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity;
  • Determining access or assigning persons to educational or vocational training institutions, or assessing students in such institutions and participants in tests commonly required for admission to educational institutions;
  • Used in recruitment as well as for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating work performance and behaviour;
  • Used to determine the creditworthiness of persons or establish their credit score.

The list contained in Annex III can (and, in fact, likely will) be expanded by the Commission through delegated acts (based on the criteria contained in Article 7 of the Proposal). The EC will assess the need to amend the high-risk list once a year.

4. Obligations for high-risk AI

Providers are particularly burdened by the new rules and must ensure that their AI systems comply with the requirements in Articles 8 to 15, which include key obligations such as:

  • Establishing appropriate data governance practices and ensuring that data sets are relevant, representative, free of errors and complete (one possible check of this kind is sketched after this list);
  • Documentation and record-keeping, including traceability of decisions;
  • Transparency to enable the user to understand and control the AI system;
  • Establishing, implementing and documenting a risk management system;
  • Guaranteeing accuracy, robustness, cybersecurity and ensuring human oversight.
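
By way of illustration only, the following sketch (in Python, using the pandas library) shows one way a provider might begin to operationalise the “free of errors and complete” requirement for a training data set. The specific checks and the file name are our own assumptions; the Proposal does not prescribe any particular tooling.

    import pandas as pd

    # Illustrative only: elementary completeness and consistency checks a
    # provider might run on a training data set. Nothing here is mandated
    # by the Proposal; the checks and thresholds are assumptions.
    def basic_dataset_checks(df: pd.DataFrame) -> dict:
        """Report simple data-quality indicators for a data set."""
        return {
            "rows": len(df),
            "missing_values": int(df.isna().sum().sum()),   # completeness
            "duplicate_rows": int(df.duplicated().sum()),   # possible errors
            "constant_columns": [c for c in df.columns      # uninformative fields
                                 if df[c].nunique(dropna=True) <= 1],
        }

    # "training_data.csv" is a hypothetical file used purely for illustration.
    report = basic_dataset_checks(pd.read_csv("training_data.csv"))
    print(report)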

In addition, providers must also ensure that the AI undergoes the relevant conformity assessment procedure, write the technical documentation in accordance with the requirements in Annex IV, implement a quality management system, report to national competent authorities any serious incidents or malfunctioning constituting a breach of obligations intended to protect fundamental rights, and register the AI system under the legally required procedure.

A number of obligations also apply to User entities, such as the obligation to: a) monitor high-risk AI for serious incidents, malfunctions or signs that the system presents a risk at national level; b) store logs, when these are under their control; and c) when exercising control over input data, ensure that it is relevant for the AI system’s purpose.

In addition, it is important to note that other parties (including User entities) may be considered a provider, and assume the corresponding obligations, when placing a system on the market or putting it into service under their name or trademark, modifying its purpose or making substantial modifications to an AI system.

5. Specific transparency obligations for certain AI systems

Certain types of AI are subject to special obligations due to their nature (regardless of whether they are considered high-risk), namely:

  • AI intended to interact with natural persons must provide information regarding its nature as an AI system (a minimal illustration follows this list);
  • Emotion recognition and categorisation systems must notify the persons exposed to them of their use;
  • AI used to produce deep fakes must disclose that the content has been artificially created or manipulated.
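
As a minimal illustration of the first of these duties, a conversational system could disclose its nature before any interaction takes place. The wording of the notice and the tiny chat interface below are our own assumptions, not text from the Proposal.

    # Illustrative sketch of the transparency duty for AI systems that
    # interact with natural persons. The disclosure wording and this
    # minimal interface are assumptions, not requirements from the text.
    AI_DISCLOSURE = "Please note: you are interacting with an AI system."

    def start_session(user_message: str) -> list:
        """Open a chat session, disclosing the system's nature up front."""
        return [AI_DISCLOSURE, f"Bot: how can I help with '{user_message}'?"]

    for line in start_session("my insurance claim"):
        print(line)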

6. AI supervisory authority

Member States shall designate a national supervisory authority to supervise the application and implementation of the Regulation. A new European body, similar to the European Data Protection Board, will also be created: the European Artificial Intelligence Board.

7. Penalties

Entities that: a) do not comply with the prohibition of certain AI uses under Article 5 (see section 2 above); or b) fail to implement the data governance and data set management rules under Article 10 (section 4 above) shall be subject to fines of up to 30 000 000 EUR or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Providing incorrect, incomplete or misleading information to notified bodies and national competent authorities may result in a fine of up to 10 000 000 EUR or up to 2% of the total worldwide annual turnover, whichever is higher. Non-compliance with any other requirement or obligation under the Regulation may result in a fine of up to 20 000 000 EUR or up to 4% of the total worldwide annual turnover, again whichever is higher.
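
To make the “whichever is higher” mechanics concrete, the short Python sketch below computes the applicable ceiling for each tier. The tier labels and the example turnover figure are our own; only the fixed amounts and percentages come from the Proposal.

    # Fine ceilings under the Proposal: for companies, the applicable
    # maximum is the HIGHER of a fixed amount and a share of total
    # worldwide annual turnover. Tier labels are our own shorthand.
    FINE_TIERS = {
        "prohibited_use_or_data_governance": (30_000_000, 0.06),  # Articles 5 and 10
        "other_obligations": (20_000_000, 0.04),
        "incorrect_information": (10_000_000, 0.02),
    }

    def max_fine(tier, worldwide_turnover_eur=None):
        """Return the maximum fine in EUR for a given infringement tier.

        worldwide_turnover_eur is the offender's total worldwide annual
        turnover for the preceding financial year (None for non-companies).
        """
        fixed_cap, turnover_share = FINE_TIERS[tier]
        if worldwide_turnover_eur is None:
            return fixed_cap
        return max(fixed_cap, turnover_share * worldwide_turnover_eur)

    # Example: a company with EUR 2 billion in turnover breaching Article 5
    # faces a ceiling of max(30M, 6% of 2bn) = EUR 120 million.
    print(max_fine("prohibited_use_or_data_governance", 2_000_000_000))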

Member States may establish additional rules regarding penalties. However, these cannot derogate from the above and must be effective, proportionate, and dissuasive.

8. Next Steps

The Proposal marks the (official) start of the EU legislative procedure and will be followed by trilogue discussions to reach a decision on the final text.

Negotiations are expected to take place throughout this year. In addition, the Proposal puts forward a period of one or two years before its full application. With this in mind, any entity that is starting to develop an AI-based product or service, or plans to implement one, and expects it to still be in use within that timeframe should future-proof its AI solution to avoid the costs of ex post compliance with the Regulation, or of having to take the product or service off the market.