By Mike Knight

A new set of European Commission rules, called the AI Liability Directive, could make it easier for people harmed in accidents involving AI to sue.

Why?

The European Commission says that the new proposals, which could become the first-ever legal framework on AI, are needed because:

  • AI developers, deployers and users need clear requirements and obligations regarding specific uses of AI. 
  • The administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs), need to be minimised.
  • Some AI systems create risks. For example, it is often not possible to find out why an AI system has made a particular decision or prediction and taken a particular action, and it may, therefore, be difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.  
  • Although existing legislation provides some protection, it is not currently sufficient to address the specific challenges AI systems may bring. 

What Will The Proposed New Rules Cover?

The proposed rules will:

  • Address risks specifically created by AI applications.
  • Propose a list of high-risk applications.
  • Set clear requirements for AI systems for high-risk applications.
  • Define specific obligations for AI users and providers of high-risk applications.
  • Propose a conformity assessment before the AI system is put into service or placed on the market.
  • Propose enforcement after such an AI system is placed on the market.
  • Propose a governance structure at European and national level.

Risk-Based

The European Commission’s rules will take a risk-based approach in deciding the strength of rules that will be applied to different AI systems. For example:

  • AI systems which pose an ‘unacceptable risk’, i.e. those considered a clear threat to the safety, livelihoods and rights of people, will be banned. This category ranges from social scoring by governments to toys using voice assistance that encourage dangerous behaviour. 
  • AI systems identified as high-risk, e.g. AI technology used in critical infrastructure, safety components of products, essential private and public services (such as credit scoring), and law enforcement that may interfere with people’s fundamental rights, will be subject to strict obligations before they can be put on the market. These obligations could include risk assessment and mitigation systems, logging of activity to ensure traceability of results, clear and adequate information and documentation for the user, and appropriate human oversight measures to minimise risk. Biometric identification is an example of a ‘high-risk’ AI system.

In addition to the unacceptable risk and high-risk categories, the European Commission’s proposed regulatory framework for AI systems will also include two more categories:

  • Limited risk – AI systems with specific transparency obligations, e.g. chatbots, where users must be made aware that they are interacting with a machine.
  • Minimal, or zero risk. For example, the proposed rules will allow the free use of minimal-risk AI such as AI-enabled video games or spam filters.

Who’s In Charge Of What?

Giving a high-risk AI system as an example, the European Commission says “once an AI system is on the market, authorities are in charge of market surveillance, users ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning.” 

Also, the European Commission recognises that the fast-evolving pace of AI technology, plus the fact that AI applications should remain trustworthy even after they have been placed on the market, means that ongoing quality and risk management by providers will be needed to future-proof the legislation. 

Part Of A Wider Package

The European Commission says that the proposed new rules are part of a wider AI package, which also includes the updated Coordinated Plan on AI. It is hoped that the regulatory framework and coordinated plan will guarantee the safety and fundamental rights of people and businesses when it comes to AI and could also strengthen uptake, investment, and innovation in AI across the EU. 

What Does This Mean For Your Business?

There have been concerns for some time that the introduction of AI systems, some of which could pose a high risk, is happening before adequate regulation has been put in place. The proposed new rules, which are part of a wider set of rules governing AI, will help plug the gaps in existing legislation while helping with future-proofing, provide clearer guidance to developers, deployers and users, and provide more protection for people, since many aspects of our lives will be subject to decisions made by AI systems in the near future and beyond. The greater clarity provided by the new rules could also mean a boost for AI systems and greater uptake among businesses, and that compensation for the effects of a defective AI system can be sought from the AI-system provider or from any manufacturer that integrates an AI system into a product.

Doubtless, the legal sector will see many new precedents set and case law in this area comprehensively extended.
