We have talked a lot about AI recently, including the CMA’s new focus on pricing algorithms and AI from a data privacy perspective.  The EU Commission has now proposed a new Regulation on AI which, if approved, will apply directly across all EU member states and rests on what the Commission describes as a future-proof definition of AI.  

The proposed new law will need to be considered alongside EU data privacy requirements, and it is no joke: for certain breaches, fines can reach EUR 30m or, if higher, 6% of worldwide annual turnover.
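To see how that headline cap operates in practice, here is a minimal sketch; the helper name `max_fine_eur` and the example turnover figures are ours and purely illustrative, drawn only from the thresholds above (and this is not legal advice).

```python
# Illustrative sketch only (not legal advice): for the most serious breaches,
# the proposed ceiling is the higher of EUR 30m or 6% of worldwide annual
# turnover. The function name and example figures are hypothetical.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine under the draft Regulation's headline cap."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# A business with EUR 200m turnover: 6% is EUR 12m, so the EUR 30m floor applies.
print(f"EUR {max_fine_eur(200_000_000):,.0f}")    # EUR 30,000,000

# A business with EUR 1bn turnover: 6% is EUR 60m, which exceeds the floor.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 60,000,000
```

The practical point is that the EUR 30m figure is a floor, not a ceiling: for any business with worldwide annual turnover above EUR 500m, the 6% turnover-linked cap is the binding one.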

The new rules will follow a risk-based approach with four levels of risk: unacceptable, high, limited and minimal.  They also include special rules for biometrics and will impact those trading in EU countries.  

Unacceptable risk

AI systems that are considered a clear threat to the safety, livelihoods and rights of people will be banned.  These include AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour by minors) and systems that allow ‘social scoring’ by governments.

High-risk

AI systems identified as high-risk include AI technology used in:

- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk

- Educational or vocational training, that may determine access to education and the professional course of someone's life (e.g. scoring of exams)

- Safety components of products (e.g. AI application in robot-assisted surgery)

- Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures)

- Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)

- Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence)

- Migration, asylum and border control management (e.g. verification of authenticity of travel documents)

- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market, including:

- Adequate risk assessment and mitigation systems

- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes

- Logging of activity to ensure traceability of results

- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance

- Clear and adequate information to the user

- Appropriate human oversight measures to minimise risk

- High level of robustness, security and accuracy

Biometrics

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements.  In principle, the proposed Regulation prohibits their live use in publicly accessible spaces for law enforcement purposes.  It defines narrow exceptions (e.g. where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use will be subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk

Limited risk covers AI systems with specific transparency obligations.  When using AI systems such as chatbots, users will need to be made aware that they are interacting with a machine so they can make an informed decision about whether or not to continue.

Minimal risk

The draft Regulation permits the free use of applications such as AI-enabled video games or spam filters.  The vast majority of AI systems are expected to fall into this category, representing only minimal or no risk to citizens' rights or safety.

What do vendors need to do?

Any new AI products coming onto the EU market will need to undergo conformity assessments, and for certain high-risk AI systems an independent notified body will also have to be involved.  The Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI.  Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.  However, enforcement will largely be down to Member States.

Reactions

The Commission’s proposal has not been met with universal approval.  The European Digital Rights association has said that, although it is positive that the Commission acknowledges that some uses of AI are simply unacceptable and need to be prohibited, the draft Regulation does not prohibit the full extent of unacceptable uses of AI and, in particular, all forms of biometric mass surveillance.  It says this leaves a worrying gap for discriminatory and surveillance technologies used by governments and companies, and that the Regulation allows too wide a scope for self-regulation by companies profiting from AI.  Other groups say that the approach should be rights-based, rather than risk-based. 

On the other hand, the European Parliament’s AI Committee says: “This is a first of its kind proposal, globally, and it is likely to have a strong influence on the worldwide development of AI.  In Parliament, we now need to act on two fronts. First, we need to reduce any unnecessary burden on start-ups, SMEs, and industry so that AI can be unleashed to its full economic potential and offer clarity of rules and of process so our business and industry can thrive”. 

Next steps

The European Parliament and Member States will need to adopt the Commission's proposal under the usual legislative procedure.  Once it is adopted, the Regulation will affect any business using AI in the EU, which will need to meet the conformity requirements.  

Rather like ‘privacy by design’, it’s worth getting ahead of the game on this one.