The AI Act’s Blueprint for Responsible AI
Following three days of marathon debates, the Council and the European Parliament reached a provisional agreement on the law on 9 December 2023. The AI Act harmonises rules on AI systems, ensuring that they comply with fundamental rights and EU values. The final text still needs to be formally adopted and should apply from 2026.
💬 We have a message for you, and rest assured, it is not #AI-generated.
Welcome to the world’s first-ever rules on #ArtificialIntelligence!
✅ Today’s political agreement on the 🇪🇺 #AIAct will ensure AI develops in a human-centric, transparent & responsible way.
Read more ↓
— Digital EU 🇪🇺 (@DigitalEU) December 9, 2023
The new law is one of the world’s first attempts to limit the use of this rapidly evolving technology, which carries wide-ranging economic and societal implications. The AI Act’s main purpose is to strengthen Europe’s position as a global AI hub, from the lab to the market, and to harness the potential of this technology for industrial use.
The main novelty delivered by the AI Act is a classification system that establishes the level of risk an AI system could pose to human well-being. The framework encompasses four main risk tiers – unacceptable, high, limited and minimal/no risk.
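To make the tiered framework concrete, the four risk levels can be sketched as a simple lookup. Only the tier names come from the AI Act itself; the example systems and the mapping below are illustrative assumptions commonly cited in discussions of the Act, not part of the regulation, and real classification is a legal assessment rather than a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers in the AI Act's classification framework."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # permitted, subject to strict obligations
    LIMITED = "limited"             # permitted, subject to transparency duties
    MINIMAL = "minimal/no risk"     # largely unregulated

# Illustrative examples only -- not an exhaustive or authoritative list.
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def tier_of(system: str) -> RiskTier:
    """Return the illustrative risk tier for a named example system."""
    return EXAMPLE_SYSTEMS[system]

print(tier_of("email spam filter").value)  # minimal/no risk
```

The point of the sketch is that obligations attach to the tier, not to the technology as such: the same underlying model can land in different tiers depending on its intended use.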
Defining AI systems
To define AI, we will use the widely accepted definition provided by the European Commission’s High-Level Expert Group: Artificial Intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.
AI-based systems can be either purely software-based (image analysis software, search engines, face recognition systems, voice assistants) or embedded in hardware devices (autonomous vehicles, drones, IoT applications, advanced robots).
AI regulation encompasses a wide range of topics. We will be releasing a series of articles delving into various aspects of this subject. In this particular article, we are going to explore the relationship between AI and corporate compliance, considering the existing legislative framework and recommendations in the Republic of Serbia. Let’s take a look.
The Strategy for the Development of Artificial Intelligence in the Republic of Serbia for the period 2020-2025
The Serbian Strategy for the development of AI for the period 2020-2025 laid down objectives and measures for the development of AI that should result in economic growth, the development of new skills and the improvement of public services.
The Strategy aims to achieve several objectives. With regard to the business sector, it determines that businesses should adapt to novel working models and market dynamics, accompanied by the development of business entities based on the use of AI. Legal regulations should be adapted to the new circumstances and to the requirements of new business models, product development and AI-powered services, while taking overall safety into consideration.
Now, let’s examine several measures outlined by the Strategy that hold significance for the business sector.
Support for startups and SMEs
This measure aims to achieve several key objectives. Primarily, its purpose is to set out mechanisms for communication, the exchange of experiences and the improvement of knowledge among startups and SMEs in the field of AI.
Apart from providing startups and SMEs with expert services in divergent spheres such as economics and law, this measure aims to enable them to use technological infrastructure, such as high-performance computing systems suitable for machine learning, under favourable conditions.
The Strategy adds that it also aims to connect startups and SMEs in the field of AI with public-sector institutions that could serve as sources of data for machine learning projects.
Continuous monitoring and analysis of the state of AI
The Strategy states that AI forms part of a wide array of activities within various companies; it is therefore important to establish continuous analysis and monitoring in order to obtain a clear picture of which companies are working with AI, in what way, and with what resources.
This should be done by gathering data through questionnaires completed by business entities and delivered to the Statistical Office of the Republic of Serbia for official statistical research. This mandatory data includes whether a company uses AI within its business operations, whether it develops AI-based products and which methods it applies to do so.
Ethical principles and safety standards of AI
The Strategy highlights that the application of AI-based systems brings several ethical and security challenges. Ethical and safety standards should be ensured in relation to the protection of personal data, protection against discrimination when applying machine learning methods, and the establishment of responsible development in accordance with relevant international principles.
As its main goal, the Strategy sets out the introduction of prevention mechanisms that will enable the responsible creation of AI, along with methods of verifying that machine-learning-based systems comply with prescribed ethical and safety standards.
Ethical Guidelines for development, implementation and use of robust and accountable AI
The Government of the Republic of Serbia adopted a Conclusion in 2023 endorsing the Ethical Guidelines for the development, implementation and use of reliable and responsible AI. The Serbian Guidelines reflect the main principles laid down in the 2021 UNESCO Recommendation on the Ethics of AI and relevant EU legal documents.
Main actors covered by the Guidelines
The Guidelines aim to establish a horizontal approach to the application of the rules and therefore cover the following actors:
- Persons working on the development and/or implementation of AI systems
- Persons who use AI systems primarily for their work, which includes interaction with other persons (for example, market participants)
- Persons using AI systems who are either directly affected by the systems (for example, they use systems to access public services) or indirectly affected by the systems (for instance, they are a part of a rare disease research group where medical data is processed as part of Serbia’s strategy to enhance public health)
- General public
Main points covered by the Guidelines
High-risk AI systems
Following a description of the terms and concepts used within the Guidelines, the document explains the scope and use of high-risk AI systems (recall the classification laid down by the EU AI Act). A high-risk AI system is a system that could, directly or indirectly, violate the principles and conditions laid down in the Guidelines, without necessarily doing so. According to the document, high-risk systems are not deemed undesirable, but they do need to be specifically assessed.
To classify a particular AI system as high-risk, it does not matter whether the system is already in use, nor whether it constitutes a standalone product and/or service or forms an integral part of another product or service.
Data Protection
Secondly, the Guidelines emphasise the importance of data protection. Given the topic’s complexity, we will explain the relationship between data protection and AI systems in one of our future articles. In the B2B context, for example, the factors contributing to this complexity and uncertainty include the potential applicability of different legal regimes (data cuts across divergent areas of law), the number of parties involved in the data value chain, and the availability of appropriate enforcement mechanisms.
The mere interaction between AI and the GDPR raises several legal and technical questions that require a set of assessments. In most cases, these assessments can be documented through data protection impact assessments or legitimate interest assessments.
Long story short, the Guidelines lay down an obligation to carry out a data protection impact assessment in particular circumstances related to profiling and the legal position of a natural person, law enforcement activities, and the monitoring of publicly accessible areas.
Requirements for accountable AI systems
After listing the main principles embraced by the Guidelines, the document lays down requirements for robust and accountable AI systems, differentiating between technical and non-technical methods. While the technical methods are provided in the form of recommendations, the non-technical methods examine organisational and similar non-technical elements vital for the development and use of AI systems, and are provided in the form of a questionnaire to assist individuals and organisations.
What is an AI Acceptable Usage Policy (AUP)?
The AI Act delineates a series of responsibilities for providers of AI-driven systems, particularly those categorised as high-risk. For the moment, let’s set aside the requirements the AI Act directs at providers and distributors of AI systems. Examining the business practices of numerous companies reveals widespread adoption of Generative AI, and global trends make it apparent that there is a growing need to formulate AI Acceptable Usage Policies that define expectations for the use of such tools.
Before creating any type of policy, you need to understand what such a document represents. A policy is a document that communicates required and prohibited conduct and activities, reflecting the organisation’s broader goals and corporate culture. Prior to implementing a policy, you need a solid understanding of what Generative AI is, of your company’s organisational needs and of the relevant regulatory landscape.
The policy should be tailored to your company’s unique needs, but a few sections deserve particular attention. For example, it is vital to define who falls within the policy’s scope, whether a governance procedure is in place to allocate responsibility, the acceptable terms of use, and the requirements for legal and ethical compliance.
Like other policy documents, an AUP is a living document: it needs to be updated in line with emerging use cases, market expectations and regulatory developments.
Please note that this piece does not offer legal advice but rather represents the author’s standpoint.