In April 2021, the European Commission proposed the first-ever legal framework on Artificial Intelligence (the AI Act), which addresses the risks of AI and positions Europe to play a leading role globally. The AI Act (2021/0106(COD)), together with its Annexes (COM(2021)0206), applies to the development, deployment, and use of AI in the EU, or where it affects people in the EU. The Regulation adopts a risk-based approach, which includes assessing the risk level and impact of an AI technology in the context of its specific purpose and use. You must therefore ensure that any AI you develop or use is safe and trustworthy. In general, the Regulation's most important provisions concern high-risk AI systems seeking access to the EU internal market, and these provisions must be respected.
The AI Act is still a draft, but with several compromise amendments now approved, the negotiations on the adoption of the final European AI Act are expected to conclude. According to current forecasts, the Act will enter into force before the end of 2023. The purpose of this overview is to inform potential providers of AI systems about the main principles, rules, requirements, and changes of the AI Act on the basis of the compromise text. The information provided here will also help you officially register an AI system on the European market in accordance with EU law.
Artificial intelligence (AI) is a rapidly evolving family of technologies that can contribute, and already contributes, to a wide array of economic, environmental, and societal benefits across a wide range of industries and social activities. In certain cases, however, artificial intelligence may generate risks and cause physical, psychological, societal, or economic harm to public or private interests and to fundamental rights protected by Union law. The main objective of the Regulation is therefore to «facilitate the uptake of human-centric and trustworthy AI and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects of AI systems in the Union while supporting innovation and improving the functioning of the internal market». To achieve this, the Regulation establishes a legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence in conformity with EU values, and ensures the free cross-border movement of AI-based goods and services.
It is important to note that AI systems in the Union are subject to the relevant product safety legislation, which provides a general framework protecting consumers against dangerous products. In addition, the Regulation does not seek to affect the application of existing Union law governing the processing of personal data, or the fundamental rights to private life and to the protection of personal data. The framework applies mainly to:
In short, the legal framework applies if you develop, deploy, or use an AI system in the EU, regardless of whether you are located inside or outside the EU. It does not apply, however, when you use AI for private or non-professional purposes.
When developing an AI system, the provider shall assess the potential risks it may create, primarily risks to the public rather than to the provider itself. The AI Act simplifies this independent risk assessment by providing a set of specific classifications. It follows a risk-based approach and establishes obligations for providers, as well as for users (deployers) and other parties, depending on the level of risk the AI system can generate. There are four categories of risk: prohibited practices (unacceptable risk), high-risk, limited-risk, and minimal-risk AI systems.
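Purely as an illustrative sketch, and not a legal test, the four risk tiers and the broad obligations attached to them as described in this overview could be modelled as follows; the function and its summaries are simplifications introduced here for illustration:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited practice / unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

def obligations_for(tier: RiskTier) -> str:
    """Very rough summary of the obligations attached to each tier (illustrative only)."""
    if tier is RiskTier.UNACCEPTABLE:
        return "placing on the EU market is prohibited"
    if tier is RiskTier.HIGH:
        return "mandatory requirements, conformity assessment and registration"
    if tier is RiskTier.LIMITED:
        return "transparency obligations, e.g. disclosing the use of AI"
    return "no additional obligations under the AI Act"

print(obligations_for(RiskTier.HIGH))
```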
Limited risk means that certain AI systems may create specific risks of manipulation. Such AI systems must meet specific transparency obligations, which apply to AI systems that:
The main obligation the Regulation imposes on providers of limited-risk AI systems is that natural persons must be notified that they are interacting with an AI system, for example by means of a disclaimer. This allows natural persons to decide whether or not they want to use the AI system under such circumstances. It is important to note that generating or manipulating content is subject to exceptions for legitimate purposes (law enforcement or freedom of expression). There are no other requirements for providers of limited-risk AI systems to gain access to the EU market. Limited-risk AI systems include AI chatbots, inventory management systems, customer segmentation systems, emotion recognition and biometric categorisation systems, and systems generating deep fakes or synthetic content.
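As a purely hypothetical sketch of how a provider of a limited-risk system (for example a chatbot) might surface such a notification in practice, the disclosure text and function below are invented for illustration and are not prescribed by the Act:

```python
# Illustrative only: the wording of the disclosure is not prescribed by the AI Act.
AI_DISCLOSURE = (
    "You are interacting with an AI system. "
    "Responses are generated automatically."
)

def start_chat_session(user_name: str) -> list[str]:
    """Open a chat session, showing the AI disclosure before the first reply."""
    return [AI_DISCLOSURE, f"Hello {user_name}, how can I help you today?"]

for line in start_chat_session("Alice"):
    print(line)
```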
Minimal-risk AI systems pose minimal or no risk to natural persons' fundamental rights. Many such systems are already in use around the world, and most AI systems fall into this risk category. There are no obligations or restrictions on minimal-risk AI systems, so providers are free to develop such systems and place them on the market. Minimal-risk AI systems include, for instance, AI-enabled video and computer games and spam filters.
High-risk AI systems may only be placed on the Union market, put into service, or used if they comply with certain mandatory requirements. The classification of an AI system as high-risk is based on its intended purpose, its area of application and existing harmonisation legislation. Pursuant to Article 6 of the AI Act, high-risk AI systems are classified into two categories:
Annex II of the AI Act covers a wide range of products, such as toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.
Furthermore, AI systems intended to be used for a specific purpose, i.e. falling under at least one of the critical areas or use cases listed in Annex III, shall also be considered high-risk only if they pose a significant risk of harm to health, safety, fundamental rights, or the environment. All the critical areas and intended purposes of high-risk AI systems under Annex III are set out in the table below.
| Area | Intended purpose |
|---|---|
| Biometric and biometrics-based systems | Biometric identification of natural persons or to make inferences about personal characteristics on the basis of biometric or biometrics-based data, including emotion recognition systems (except prohibited practices). |
| Management and operation of critical infrastructure | Safety components in the management and operation of road traffic and the supply of water, gas, heating, electricity and critical digital infrastructure. |
| Education and vocational training | |
| Employment, workers management and access to self-employment | |
| Access to and enjoyment of essential private services and public services and benefits (healthcare services, housing, electricity, heating/cooling, internet etc.) | |
| Law enforcement | |
| Migration, asylum, and border control management | |
| Administration of justice and democratic processes | Assistance in researching, interpretation of facts, application of law based on concrete facts, use for dispute resolution. |
This list of high-risk areas and use cases is non-exhaustive, since the European Commission has the right to amend Annex III by adding or modifying entries if a certain AI system poses a significant risk of harm to health and safety, or an adverse impact on fundamental rights, the environment, or democracy and the rule of law. The probable or actual risk should be equivalent to or greater than the risk or impact posed by the high-risk AI systems already listed in Annex III. A significant risk of harm is assessed on the basis of two main elements: (a) the severity, intensity, probability of occurrence and duration of the risk; and (b) the scope of its influence.
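The Act does not prescribe a numerical formula for this assessment. Purely as an illustrative sketch of how the two elements named above might be combined in a provider's internal screening, the scales, weighting and threshold below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RiskScreening:
    """Internal screening of 'significant risk of harm'; all scores use an illustrative 0-1 scale."""
    severity: float     # how serious the potential harm is
    intensity: float    # how strongly affected persons would be impacted
    probability: float  # likelihood of the harm occurring
    duration: float     # how long the harm would persist
    scope: float        # breadth of influence: share of people and contexts affected

    def harm_level(self) -> float:
        # Element (a): severity, intensity, probability of occurrence and duration
        return (self.severity + self.intensity + self.probability + self.duration) / 4

    def is_significant(self, threshold: float = 0.5) -> bool:
        # Element (b): combine the harm level with the scope of influence
        return self.harm_level() * self.scope >= threshold

screening = RiskScreening(severity=0.8, intensity=0.7, probability=0.6, duration=0.5, scope=0.9)
print(screening.is_significant())  # True with these hypothetical inputs
```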
Providers of high-risk AI systems who consider that their system does not pose a significant risk of harm shall inform the national supervisory authorities by submitting a reasoned notification. The classification procedure that gives the provider the right to freely place the AI system on the market involves the following steps:
AI systems classified as 'high-risk' must meet the mandatory requirements set out in Chapter 2 of the AI Act both before and after they are placed on the market. This is necessary in order to proceed with the mandatory conformity assessment and registration. It should be noted that some specific requirements can be found throughout the entire Regulation. A high-risk AI system shall comply with the following key requirements:
Processing of data may pose significant risks to fundamental rights. The compromise text highlights the importance of data protection measures in accordance with EU data protection law. Providers should also consider methods and standards to reduce the use of resources and energy, supported by monitoring and reporting of the environmental impact.
In conclusion, before placing a high-risk AI system on the market or putting it into service, the provider should ensure that all documentation, instructions, and systems are effective, appropriate, and compliant with the requirements set out above.
The prohibition of certain unacceptable AI practices is necessary because they go against the EU's values or violate natural persons' fundamental rights.
AI systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities, or are used for social scoring are considered to pose an unacceptable risk. The list of such practices, which are also discriminatory and intrusive, was broadened by the latest amendments. According to the new compromise text of the Regulation, the prohibition of the following practices should also be considered:
Providers of high-risk AI systems must ensure and verify, through a written verification of compliance, that their systems meet all applicable provisions and standards. Under EU law, the goal of the conformity assessment process is to demonstrate fulfilment of specific consumer protection objectives and conformity with the requirements applicable to AI systems. In case of non-fulfilment, remedial measures must be implemented during the process. The conformity assessment focuses on procedural aspects, such as assessing compliance with the harmonised standards, the technical documentation requirements, the quality management system and the post-market monitoring system. How the conformity assessment is conducted also depends on the type of product and the level of risk it may pose to the public.
Prior to the conformity assessment, providers shall already have all the technical documentation referred to in the requirements above and a quality management system in the form of written policies, procedures, or instructions. The quality management system shall include:
A harmonised standard is a European standard developed by a recognised European Standards Organisation following a request from the European Commission2. Harmonised standards facilitate and simplify the technical side of the providers' conformity assessment and help them ensure the security, privacy, transparency, data protection and accessibility of a high-risk AI system. If no reference to harmonised standards covering the applicable requirements has been published in the Official Journal of the European Union, the European Commission may adopt common specifications based on the requirements for high-risk AI systems. Providers may justify compliance through equivalent technical solutions only if compliance with the common specifications cannot be achieved. High-risk AI systems are also considered to be in compliance after being tested on data relating to the specific geographical, behavioural, contextual and functional setting.
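The order of precedence described above (harmonised standards, then common specifications, then equivalent technical solutions only where compliance with the common specifications cannot be achieved) can be sketched as a simple decision flow; the function and its boolean inputs are illustrative assumptions, not part of the Regulation:

```python
def compliance_route(harmonised_standard_published: bool,
                     common_specification_achievable: bool) -> str:
    """Pick the route for demonstrating compliance of a high-risk AI system (illustrative)."""
    if harmonised_standard_published:
        # A relevant harmonised standard is referenced in the Official Journal of the EU
        return "apply the relevant harmonised standard(s)"
    if common_specification_achievable:
        # No harmonised standard: follow the common specifications adopted by the Commission
        return "comply with the common specifications"
    # Equivalent technical solutions may be used only where compliance with the
    # common specifications cannot be achieved
    return "justify compliance through equivalent technical solutions"

print(compliance_route(harmonised_standard_published=False,
                       common_specification_achievable=True))
```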
There are two ways of conducting the conformity assessment: self-assessment (assessment by the provider) or assessment by a qualified independent third party referred to as a "Notified Body". In other words, the Regulation allows the provider to choose one of the following procedures:
Internal control requires the provider to verify that the established quality management system complies with the requirements, to examine the information contained in the technical documentation, and to verify that the design and development process of the AI system and its post-market monitoring are consistent with the technical documentation.
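As a hedged illustration of the three internal-control checks just described, a provider might keep a simple self-assessment record along these lines; the structure and key names are assumptions made for this sketch, not an official template:

```python
# Illustrative internal-control record; key names are invented for this sketch.
internal_control_checks: dict[str, bool | None] = {
    "quality_management_system_compliant": None,  # QMS meets the requirements
    "technical_documentation_examined": None,     # information in the technical documentation reviewed
    "design_and_monitoring_consistent": None,     # design/development and post-market monitoring match the documentation
}

def record_check(name: str, passed: bool) -> None:
    """Record the outcome of one internal-control step."""
    internal_control_checks[name] = passed

def internal_control_complete() -> bool:
    """True only when every step has been performed and has passed."""
    return all(result is True for result in internal_control_checks.values())

record_check("quality_management_system_compliant", True)
record_check("technical_documentation_examined", True)
record_check("design_and_monitoring_consistent", True)
print(internal_control_complete())  # True
```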
External control is likewise based on assessment of the quality management system and of the technical documentation. External control is foreseen for AI systems intended to be used for remote biometric identification, or to make inferences about personal characteristics of natural persons based on biometric or biometrics-based data, including emotion recognition systems, to the extent these AI systems are not prohibited. According to the compromise amendments, however, there should be more opportunities to apply third-party conformity assessment, given the complexity of high-risk AI systems. For conformity assessment in the areas described, the provider shall submit an application to the notified body for examination of the quality management system as well as of the technical documentation.
| Data for examination of the quality management system | Data for control of the technical documentation |
|---|---|
| | |
The application is assessed and examined by the notified body. Following the examination of the application, the notified body notifies the provider of its decision, which shall contain all the conclusions and a reasoned assessment. The notified body issues an EU technical documentation assessment certificate, while the approved quality management system is subject to periodic surveillance (audits and reports). The certificate shall be valid for the period it indicates, which shall not exceed four years.
If the high-risk AI system complies with the requirements, the provider shall draw up a written EU declaration of conformity and affix the CE marking, regardless of the type of conformity assessment. However, where the CE marking follows external control, the identification number of the responsible notified body shall be affixed by the body itself or, under its instructions, by the provider's authorised representative. The EU declaration of conformity reflects the provider's responsibility for compliance with the high-risk AI system requirements and shall contain the following elements:
The CE marking is subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008 and shall consist of the initials 'CE'. In general, the CE marking shall be affixed visibly, legibly, and indelibly, only by the provider or his authorised representative (with the exception for the notified body noted above), to products that fall under specific harmonisation legislation or other Union legislation that also provides for the affixing of the CE marking before the AI system is placed on the market. It should be the only marking attesting the provider's responsibility for conformity, although an additional marking indicating a special risk of use is allowed. A digital CE marking may be affixed if it can be easily accessed via the interface of the AI system, its code, or other electronic means.
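Purely as a hypothetical illustration of a digital CE marking accessible via the AI system's interface, the function, field names and URL below are invented for this sketch and are not defined by the Regulation:

```python
def ce_marking_metadata(notified_body_id: str | None = None) -> dict:
    """Return machine-readable CE marking information exposed by the AI system's interface."""
    metadata = {
        "ce_marking": "CE",
        # Hypothetical location of the EU declaration of conformity
        "eu_declaration_of_conformity": "https://example.com/eu-declaration-of-conformity.pdf",
    }
    if notified_body_id is not None:
        # Where external control was carried out, the identification number of the
        # responsible notified body accompanies the CE marking.
        metadata["notified_body_id"] = notified_body_id
    return metadata

print(ce_marking_metadata(notified_body_id="1234"))
```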
High-risk AI systems must be registered in the EU database before being put into service, in order to ensure transparency for the public and support the development of the artificial intelligence field. The public EU database will be set up, controlled, and maintained by the Commission. Providers are advised to familiarise themselves with the data that must be entered into the EU database before the database and the AI Act itself, with the compromise amendments, enter into force. It should be noted that the provider can proceed to registration only after the conformity assessment has been performed and the necessary documentation is available. If the developer, and subsequently the provider, decides to place the high-risk AI system on the EU market, the following information shall be provided, registered, and kept up to date:
This information is intended for legal persons or organisations that want to perform conformity assessments as a notified body under the AI Act. A notified body is a conformity assessment body that performs third-party conformity assessment activities, including testing, certification, and inspection. To become a notified body, an organisation must be designated by the national notifying authority of the Member State in which it is established, after applying for notification. The application shall include a description of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies for which the conformity assessment body claims to be competent, accompanied by an accreditation certificate. In Estonia, the certificate can be issued by the Estonian Centre for Standardisation and Accreditation.
Furthermore, there is a range of requirements that notified bodies must follow. While the notified bodies' main obligation is to verify the conformity of high-risk AI systems, they shall also satisfy organisational, management, resource, process and minimum cybersecurity requirements in order to carry out their duties. These duties shall be performed independently of the provider, by employees with sufficient knowledge who have no connection to the provider for a 12-month period both before and after providing such services. Notified bodies shall have applicable procedures and internal competences (the availability of sufficient administrative, technical and scientific personnel) for their conformity assessment activities and shall participate in coordination activities.
There are two notifying authorities in Estonia:
As regards the procedural aspects of notification, the organisation shall consider the following steps:
After the notification procedure is completed, the Commission shall assign a single identification number to the new notified body, regardless of the number of Union acts under which the body is notified. If the notified body no longer fulfils the obligations and requirements relating to conformity assessment, the Commission may open an investigation based on the information provided by the national authority and take corrective measures, such as suspension or withdrawal of the notification.
The AI Act lays down rules on penalties and administrative fines for infringements related to non-compliance with its requirements, obligations, or prohibitions. It should be noted that the compromise text lays down new amounts compared to the original proposal. Now that the compromise has been approved, the following illustrates the cases of non-compliance and the estimated penalties according to the latest compromise text:
Penalties shall be determined according to the general principles of effectiveness, dissuasiveness, and proportionality to the offence or actions performed. The individual circumstances, nature, duration and character of the offence can affect the size of the penalty.
1 The standardised template will be developed by the Commission in the near future.
2 References of harmonised standards and of other European standards published in the OJEU are available on the official website of the European Union: https://single-market-economy.ec.europa.eu/single-market/european-standards/harmonised-standards_en
3 The electronic list of notified bodies. https://ec.europa.eu/growth/tools-databases/nando/
4 Blue Guide - guidance on the application of all aspects of the single market for products including the role and organisation of notified bodies.