The European Commission’s proposal for AI Regulation: The Path Towards a Trustworthy AI

10.03.2022.

In April 2021, the European Commission adopted a proposal for a regulation laying down harmonised rules on artificial intelligence (the ‘AI Act’). The proposal has been described as the first-ever legal framework on artificial intelligence, accompanied by a coordinated plan intended to guarantee fundamental rights and safety while encouraging AI innovation and investment. In other words, the Commission recognised the advantages AI-based systems can bring to society and took a step towards regulating, and reducing, the possible negative impacts of one of today’s most closely watched technologies.

Article 3(1) of the proposal defines artificial intelligence broadly: software that is developed with one or more of the techniques and approaches listed in Annex I (machine learning approaches; logic- and knowledge-based approaches; and statistical approaches, Bayesian estimation, and search and optimisation methods) and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with. The definition was influenced by the OECD’s understanding of AI in its 2019 Recommendation of the Council on Artificial Intelligence.
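The definition thus has two limbs: the software must be built with an Annex I technique, and it must generate outputs that influence its environment. A minimal sketch in Python may help make this structure concrete; the class, enum and function names below are purely illustrative assumptions, not terminology from the proposal, and whether real software qualifies would of course turn on legal analysis rather than a boolean check.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AnnexITechnique(Enum):
    """Illustrative labels for the three groups of techniques listed in Annex I."""
    MACHINE_LEARNING = auto()
    LOGIC_AND_KNOWLEDGE_BASED = auto()
    STATISTICAL_BAYESIAN_SEARCH_OPTIMISATION = auto()

@dataclass
class SoftwareSystem:
    # Which Annex I techniques, if any, the software was developed with.
    techniques: set
    # Whether it generates content, predictions, recommendations, or
    # decisions influencing the environments it interacts with.
    generates_influencing_outputs: bool

def falls_under_article_3_1(system: SoftwareSystem) -> bool:
    """Both limbs of the Article 3(1) definition must be satisfied."""
    return bool(system.techniques) and system.generates_influencing_outputs

# Example: a recommender system built with machine learning satisfies both limbs.
recommender = SoftwareSystem(
    techniques={AnnexITechnique.MACHINE_LEARNING},
    generates_influencing_outputs=True,
)
print(falls_under_article_3_1(recommender))  # True
```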

The purpose of this novel legal framework is to support the EU’s objective of becoming a global leader in the development of trustworthy and secure AI. The term ‘trustworthy AI’ was introduced in the 2019 Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on AI, which set out seven key requirements that AI-based systems should meet to be deemed trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

The central element of the proposed AI Act is a risk-based approach that prohibits certain unacceptable uses of AI and strictly regulates other AI-based systems that carry significant risks. One can picture it as a pyramid, starting from the top. Article 5 lists the AI practices that are prohibited, such as systems or applications that manipulate human behaviour and systems that allow ‘social scoring’ by governments. The second layer from the top comprises high-risk AI systems: those that involve the use of AI technology in education, law enforcement, the administration of justice, safety components of products, critical infrastructure and other significant areas of society, and that are therefore subject to strict obligations before they can be placed on the market. Notably, all remote biometric identification systems are considered high-risk.

The third layer from the top covers limited-risk AI systems. The defining feature of the AI-based systems in this category is that they raise particular transparency concerns. Technologies such as deep fakes and AI-based systems designed to interact with human beings are therefore subject to specific transparency obligations. Finally, the base of the pyramid holds minimal-risk AI systems, such as spam filters or AI-enabled video games; according to the Commission, the majority of AI applications fall into this category. For providers of such systems, the proposal encourages voluntary codes of conduct, as laid down in Article 69.
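The four tiers and the treatment attached to each can be summarised in a short sketch. The following Python snippet is purely illustrative: the tier labels, example use cases and obligation summaries are informal paraphrases of the proposal, not legal categories that could be relied upon, and classifying a real system would require assessment against the proposal’s annexes rather than a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """Informal summary of the pyramid's four layers and their treatment."""
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "strict obligations before market entry"
    LIMITED = "specific transparency obligations"
    MINIMAL = "voluntary codes of conduct encouraged (Article 69)"

# Illustrative mapping of the examples named in this article to the four tiers.
EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "behaviour-manipulating application": RiskTier.UNACCEPTABLE,
    "remote biometric identification": RiskTier.HIGH,
    "AI used in education or law enforcement": RiskTier.HIGH,
    "chatbot interacting with humans": RiskTier.LIMITED,
    "deep fake generation": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

def treatment(use_case: str) -> str:
    """Return the regulatory treatment for a listed example use case."""
    tier = EXAMPLES.get(use_case)
    return tier.value if tier else "not listed here: requires case-by-case assessment"

print(treatment("spam filter"))  # voluntary codes of conduct encouraged (Article 69)
```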
