On 17 July, the High-Level Expert Group on Artificial Intelligence (“AI HLEG”) presented its final Assessment List for Trustworthy Artificial Intelligence. The final version builds upon the AI HLEG’s initial proposal, presented in the Ethics Guidelines for Trustworthy Artificial Intelligence, and follows a public consultation in which more than 350 stakeholders participated.

The Assessment List is intended as a flexible tool that organisations developing or using AI can use to assess whether their AI applications comply with the concept of Trustworthy AI, as developed by the European Commission with the support of the AI HLEG, and with EU fundamental rights.

Specifically, the assessment should be carried out by a multidisciplinary team of AI experts from areas such as IT, management and legal, drawn from inside or outside the organisation, and should cover the following points:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Environmental and societal well-being
  • Accountability

Although the AI HLEG recommendations are not binding, they will likely serve as a basis for upcoming EU regulations and policies on AI. The Assessment List should therefore be reflected in any future-proof AI governance framework, irrespective of the organisation’s sector.

Moreover, as outlined in the European Commission’s White Paper on AI, published earlier this year, the EU wishes to establish itself as a leader in the global AI market. In this regard, the recently agreed European budget (Multiannual Financial Framework) will bring new opportunities to innovative businesses investing in the field.

Lastly, on 23 July, the European Commission launched the Inception Impact Assessment (IIA) on the Proposal for a legal act of the European Parliament and the Council laying down requirements for Artificial Intelligence. The IIA outlines the Commission’s regulatory objectives and policy options. Stakeholders can provide feedback until Thursday, 10 September. This input will help further develop and fine-tune the Commission’s initiative, which aims to ensure the development and uptake of lawful and trustworthy AI across the Single Market through the creation of an ecosystem of trust. The proposal for the regulation is expected in the first quarter of 2021.

At VdA, our team of experts delivers strategic and sophisticated legal advice to help our clients innovate and meet their AI-related needs.