On 28 September 2022, the European Commission adopted two Proposals for Directives: one on Liability for Defective Products (PLD II) and another on AI Liability.

The PLD II will replace the almost 40-year-old Directive 85/374/EEC (the Product Liability Directive, PLD), adapting liability rules to the digital age, the circular economy and global value chains. For instance, it specifically regulates liability for self-learning systems and for defects caused by software updates, covers damage such as data loss, and ensures that consumers are protected regardless of whether the manufacturer is established inside or outside the EU.

The AI Liability Directive complements the EU civil liability regime, creating a set of rules governing damage caused by AI systems, promoting trust and legal predictability for businesses, and ensuring compensation for victims.

1. Overview

The AI Liability Directive seeks to 1) ensure that victims of AI-caused damage receive protection equivalent to that of victims of damage caused by other products; 2) reduce uncertainty for businesses using AI as to the liability they may be exposed to; and 3) prevent fragmentation of AI-specific liability rules among Member States. Nonetheless, the Directive does not fully harmonise national rules, allowing Member States to adopt or maintain rules more favourable to claimants.

The AI Liability Directive, by establishing clear rules, seeks to promote the adoption of AI technology in the EU, creating a baseline for businesses to evaluate their exposure to liability and increasing societal trust in AI systems. Moreover, it creates a more efficient liability regime, adapted to AI, under which victims of AI-caused damage can more easily bring a claim and receive compensation.


2. Main Provisions

Disclosure of evidence and rebuttable presumption of non-compliance

Upon the request of a potential claimant, national courts are empowered to order the disclosure of relevant evidence about a high-risk AI system suspected of having caused damage.

An order to disclose information should only be made after the claimant has presented evidence sufficient to support the plausibility of a claim for damages and has first attempted to obtain the evidence from the defendant.

The disclosure of information should be limited to what is necessary and proportionate to support the claim.

If the defendant fails to comply with an order to disclose information, a rebuttable presumption arises that the defendant failed to comply with a relevant duty of care.


Rebuttable presumption of a causal link in the case of fault

When all of the following conditions are met, a causal link between the fault of the defendant and the output produced by the AI system, or the failure of the AI system to produce an output, shall be presumed:

  • The fault of the defendant, or of a person for whose behaviour the defendant is responsible, has been demonstrated or presumed;
  • It can be considered reasonably likely that the fault influenced the output produced by the AI system, or the failure to produce an output;
  • The claimant has demonstrated that the output, or the failure to produce an output, gave rise to the damage.


3. Conclusion & Next Steps

To sum up, the AI Liability Directive will help victims of AI-caused damage by alleviating the burden of proof and improving access to relevant evidence, providing them with the same level of protection as if they had been harmed under any other circumstances. In addition, it should notably increase predictability and, therefore, trust in AI technologies, the lack of which has been one of the main obstacles to AI adoption in the EU, thereby fostering innovation and development in this field.

The Proposal will be subject to further negotiations and will need to be adopted by the European Parliament and the Council before entering into force.