The EU Artificial Intelligence Act
In April 2021, the European Commission proposed the first-ever legal framework on artificial intelligence (the AI Act), which addresses the risks of AI and positions Europe in a leading role globally. The AI Act (2021/0106(COD)), together with its Annexes (COM(2021)0206), applies to the development, deployment, and use of AI in the EU, or where it affects people in the EU. The Regulation adopts a risk-based approach, which includes assessing the risk level and impact of an AI technology in the context of its specific purpose and use. It is necessary to ensure that any AI you develop or use is safe and trustworthy. In general, the Regulation contains the most important provisions, which must be respected, for high-risk AI systems seeking access to the EU internal market.
The AI Act remains a draft, but the approval of several compromise amendments is expected to conclude the negotiations on the final European AI Act. According to current forecasts, the Act will enter into force before the end of 2023. The purpose of this overview is to inform potential providers of AI systems about the main principles, rules, requirements, and changes introduced by the AI Act on the basis of the compromise text. The information provided here will also help you officially register an AI system on the European market in accordance with EU law.
Purpose and scope of application
Artificial intelligence (AI) is a rapidly evolving family of technologies that can contribute, and already contributes, to a wide array of economic, environmental, and societal benefits across a broad range of industries and social activities. In certain cases, artificial intelligence may generate risks and cause physical, psychological, societal, or economic harm to public or private interests and to fundamental rights protected by Union law. The main objective of the Regulation is therefore to "facilitate the uptake of human-centric and trustworthy AI and to ensure a high level of protection of health, safety, fundamental rights, democracy and the rule of law and the environment from harmful effects of AI systems in the Union while supporting innovation and improving the functioning of the internal market". To achieve this, the Regulation establishes a legal framework for, in particular, the development, the placing on the market, the putting into service and the use of artificial intelligence in conformity with EU values, and ensures the free cross-border movement of AI-based goods and services.
It is important to note that AI systems in the Union are subject to the relevant product safety legislation, which provides a framework protecting consumers against dangerous products in general. Moreover, the Regulation does not seek to affect the application of existing Union law governing the processing of personal data, or the protection of the fundamental rights to private life and the protection of personal data. The framework applies mainly to:
- providers placing on the market/putting into service AI systems in the EU, regardless of the providers’ location;
- deployers of AI located within the EU;
- providers and deployers of AI located outside the EU, where either Member State law applies by virtue of public international law or the output produced by the system is used in the EU;
- providers placing on the market/putting into service AI systems outside the EU, where the provider or distributor of such systems is located within the EU;
- importers and distributors of AI systems as well as authorised representatives of providers, who are located in the EU;
- affected persons that are in the EU and whose health, safety or fundamental rights were impacted by the AI system.
In conclusion, the legal framework applies if you develop, deploy, or use an AI system in the EU, regardless of whether you are located inside or outside the EU. However, it does not apply when you use AI for private, non-professional purposes.
Risk categories of the AI systems
At the development stage of an AI system, the provider shall assess the potential risks the system might pose, primarily to the public rather than to the provider itself. The AI Act simplifies this independent risk assessment by providing a set of specific classifications. It follows a risk-based approach and establishes obligations for providers, users (deployers) and other parties depending on the level of risk the AI system can generate. There are four categories of risk: unacceptable risk (prohibited practices), high risk, limited risk, and minimal risk.
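As a purely illustrative reading aid (the names and one-line summaries below are our own, not text from the Act), the four tiers and the broad consequences described in the following sections can be captured in a few lines of Python:

```python
# Illustrative summary of the AI Act's four risk tiers; the enum and the
# obligation summaries are a reading aid, not wording from the Act itself.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice; may not be placed on the EU market"
    HIGH = "mandatory requirements, conformity assessment and registration"
    LIMITED = "transparency obligations (e.g. disclosing AI interaction)"
    MINIMAL = "no additional obligations under the Act"

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```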
Limited and minimal risk AI systems
Limited risk means that certain AI systems may pose specific risks of manipulation. Such AI systems shall meet specific transparency obligations, which apply to AI systems that:
- interact with natural persons;
- detect emotions/shape associations based on biometric data;
- generate/manipulate image, audio, or video content (‘deep fakes’).
The main obligation the Regulation imposes on providers of limited-risk AI systems is that natural persons must be notified that they are engaging with an AI system, for example by means of a disclaimer. This gives natural persons the opportunity to decide whether they want to use the AI system in such circumstances. It is important to note that generating or manipulating content is subject to exceptions for legitimate purposes (law enforcement or freedom of expression). There are no other requirements for providers of limited-risk AI systems to gain access to the EU market. Limited-risk AI systems include AI chatbots, inventory management systems, customer segmentation systems, emotion recognition and biometric categorisation systems, and systems generating deep fakes or synthetic content.
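As a minimal illustration of the notification duty (the Act prescribes no particular implementation, and the function and wording below are purely hypothetical), a chatbot provider might prepend a disclosure to every new session:

```python
# Hypothetical sketch only: the AI Act does not mandate any specific
# implementation of the transparency notice.

AI_DISCLOSURE = (
    "You are interacting with an AI system. "
    "Responses are generated automatically."
)

def start_session(first_messages: list[str]) -> list[str]:
    """Prepend the AI-interaction disclosure to a new chat session."""
    # Surfacing the notice before any substantive output is one way a
    # provider could evidence compliance with the transparency obligation.
    return [AI_DISCLOSURE, *first_messages]

print(start_session(["Hello! How can I help you today?"]))
```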
Minimal risk AI systems pose minimal or no risk to natural persons' fundamental rights. Many such systems are already in use around the world, and most AI systems fall into this category. There are no obligations or restrictions on minimal risk AI systems, so providers are free to develop them and place them on the market. Minimal risk AI systems include, for instance, AI-enabled video and computer games and spam filters.
High-risk AI systems
Classification rules
High-risk AI systems should only be placed on the Union market, put into service, or used if they comply with certain mandatory requirements. The classification of an AI system as high-risk is based on its intended purpose, its area of performance and existing harmonisation legislation. Pursuant to Article 6 of the AI Act, high-risk AI systems fall into two categories:
- A safety component of a product covered by the EU harmonisation legislation listed in Annex II requiring third-party conformity assessment;
- A product covered by the EU harmonisation legislation listed in Annex II requiring third-party conformity assessment.
Annex II of the AI Act covers a wide range of products, such as toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.
Furthermore, AI systems intended to be used for a specific purpose, i.e. falling under at least one critical area or use case listed in Annex III, shall also be considered high-risk only if they pose a significant risk of harm to health, safety, fundamental rights, or the environment. You can find all the critical areas and intended purposes of high-risk AI systems under Annex III in the table below.
Area | Intended purpose |
---|---|
Biometric and biometrics-based systems | Biometric identification of natural persons, or making inferences about personal characteristics on the basis of biometric or biometrics-based data, including emotion recognition systems (except prohibited practices). |
Management and operation of critical infrastructure | Safety components in the management and operation of road traffic and the supply of water, gas, heating, electricity and critical digital infrastructure. |
Education and vocational training | Determining access to or assigning natural persons to educational and vocational training institutions, and assessing students and participants in tests. |
Employment, workers management and access to self-employment | Recruitment and selection of natural persons, decisions on promotion and termination, task allocation, and monitoring and evaluating performance and behaviour. |
Access to and enjoyment of essential private services and public services and benefits (healthcare services, housing, electricity, heating/cooling, internet etc.) | Evaluating eligibility for public assistance benefits and services, evaluating creditworthiness or establishing credit scores, and dispatching or prioritising emergency first response services. |
Law enforcement | Individual risk assessments, polygraphs and similar tools, evaluation of the reliability of evidence, profiling in the course of the detection, investigation or prosecution of criminal offences, and crime analytics. |
Migration, asylum, and border control management | Polygraphs and similar tools, risk assessments, verification of the authenticity of travel documents, and examination of applications for asylum, visas and residence permits. |
Administration of justice and democratic processes | Assistance in researching and interpreting facts, applying the law to concrete facts, and use for dispute resolution. |
This list of high-risk areas and use cases is non-exhaustive, as the European Commission has the right to amend Annex III by adding or modifying entries if a certain AI system poses a significant risk of harm to health and safety, an adverse impact on fundamental rights, or a risk to the environment, democracy or the rule of law. The probable or actual risk should be equivalent to or greater than the risk or impact posed by the high-risk AI systems already referred to in Annex III. A significant risk of harm combines the assessment of two main elements: (a) the severity, intensity, probability of occurrence and duration of the risk; and (b) its scope of influence.
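To make the two-element assessment more tangible, the sketch below combines element (a) and element (b) into a single score. The formula, scales and example values are entirely our own assumptions; the Act does not define any numeric method:

```python
# Illustrative heuristic only; the AI Act prescribes no numeric formula.

def significant_risk_score(severity: float, intensity: float,
                           probability: float, duration: float,
                           scope: float) -> float:
    """Combine element (a) (severity, intensity, probability, duration)
    with element (b) (scope of influence). All inputs on a 0-1 scale."""
    element_a = (severity + intensity + duration) / 3 * probability
    return element_a * scope

# Hypothetical example: a widely deployed system of moderate severity.
score = significant_risk_score(severity=0.6, intensity=0.5,
                               probability=0.4, duration=0.7, scope=0.9)
print(f"risk score: {score:.2f}")  # compared against an internal threshold
```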
Providers of high-risk AI systems who consider that their system does not pose a significant risk of harm shall inform the national supervisory authorities by submitting a reasoned notification. The procedure for classifying the AI system, in order to gain the right to freely place it on the market, consists of the following steps:
- a one-page explanation, including the intended purpose of the AI system and the reasons why it is harmless in all potential outcomes¹;
- the notification shall be made as early as possible, preferably at the development stage;
- in case of misclassification, a substantiated objection shall be provided within three months.
Requirements for high-risk AI systems
AI systems classified as 'high-risk' must meet the mandatory requirements set out in Chapter 2 of the AI Act both before and after they are placed on the market. This is necessary in order to proceed with the mandatory conformity assessment and registration. It should be noted that some specific requirements can be found throughout the entire Regulation. A high-risk AI system shall comply with the following key requirements:
- Risk management system. High-risk AI systems are required to have an appropriate risk management system throughout their entire lifecycle. This system must identify, analyse, estimate, and evaluate potential risks based on the adopted risk management measures and on data gathered from the post-market monitoring system. The measures should aim to eliminate, reduce, or mitigate the identified risks. Residual risks must be reasonably judged, accepted, and communicated to deployers.
- Data and data governance. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality standards, as far as this is technically feasible. Special attention should be paid to biases in the data.
- Technical documentation. The documentation of high-risk AI systems shall demonstrate compliance with the requirements and contain a general description (purpose, developer, date, version of the system and software, hardware, features, and instructions) and a detailed description of the development (methods performed, the logic of design choices, architecture, data requirements, assessment of human oversight, pre-determined changes, and testing procedures).
- Record-keeping. High-risk AI systems shall enable automatic recording or 'logging' to ensure the traceability of their functioning and facilitate the monitoring of operations (a minimal logging sketch follows this list).
- Transparency and provision of information to deployers. High-risk AI systems shall enable deployers to interpret the system’s output and use it appropriately by providing the instructions for use.
- Human oversight. High-risk AI systems shall be developed with dedicated human-machine interface tools that enable natural persons to understand the circumstances of the system's functioning or malfunctioning, interpret the output, and decide how to use it.
- Accuracy, robustness and cybersecurity. The level of accuracy shall be specified in the instructions for use. Robustness and cybersecurity shall be ensured through technical redundancy solutions and appropriate measures.
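To illustrate the record-keeping requirement, here is a minimal logging sketch; the event fields and format are our own assumptions rather than anything mandated by the Act:

```python
# Minimal sketch of automatic event 'logging' for traceability; field names
# and format are hypothetical, not mandated by the AI Act.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai.audit")

def log_inference(model_version: str, input_ref: str,
                  output_ref: str, operator_id: str) -> None:
    """Record one inference event so the system's operation can be traced."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,      # a reference, not raw personal data
        "output_ref": output_ref,
        "operator_id": operator_id,  # supports human-oversight review
    }
    logger.info(json.dumps(event))

log_inference("1.4.2", "input-0001", "output-0001", "op-42")
```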
Processing of data may pose significant risks to fundamental rights. The compromise text points out the importance of data protection measures in accordance with EU data protection law. It also calls for methods and standards to reduce the use of resources and energy, and for monitoring and reporting on the environmental impact.
In conclusion, before placing a high-risk AI system on the market or putting it into service, the provider should ensure that all documentation, instructions, and systems are effective, appropriate, and compliant with the requirements set out above.
Prohibited artificial intelligence practices
The prohibition of certain unacceptable AI practices is necessary because they go against the EU's values or violate natural persons' fundamental rights.
AI systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities, or are used for social scoring are considered to pose an unacceptable risk. The list of such practices, which are also discriminatory and intrusive, was broadened by the latest amendments. According to the new compromise text of the Regulation, the prohibition of the following practices should also be considered:
- “real-time” remote biometric identification systems in publicly accessible spaces;
- “post” remote biometric identification systems, with the only exception being the ability for law enforcement to use the system for the prosecution of serious crimes and only after judicial authorization;
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- emotion recognition systems in law enforcement, border management, workplace, and educational institutions;
- indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, violating human rights and right to privacy.
Conformity assessment of high-risk AI systems
Providers of high-risk AI systems must ensure and verify, through a written verification process, that their systems meet all applicable provisions and standards. Under EU law, the goal of the conformity assessment process is to demonstrate the fulfilment of specific consumer protections and conformity with the requirements for AI systems. In case of non-fulfilment, remedial measures must be implemented during the process. The conformity assessment focuses on procedural aspects, such as compliance with the harmonised standards, the technical documentation requirements, the quality management system and the post-market monitoring system. How the conformity assessment is conducted also depends on the type of product and the level of risk it may pose to the public.
Prior to the conformity assessment, providers shall already have all the technical documentation referred to in the requirements above, as well as a quality management system in the form of written policies, procedures, and instructions. The quality management system shall include:
- techniques, procedures and actions to be used for the design, development and quality control;
- examination, test and validation procedures to be carried out before, during and after the development;
- technical specifications;
- systems and procedures for data;
- risk management system;
- the setting-up, implementation and maintenance of a post-market monitoring system;
- methods for reporting serious incidents and malfunctioning;
- handling of communication with relevant competent authorities;
- systems and procedures for record keeping of all relevant data;
- resource management;
- accountability framework.
A harmonised standard is a European standard developed by a recognised European Standards Organisation, following a request from the European Commission². Harmonised standards facilitate and simplify the technical side of the providers' conformity assessment and enable them to ensure the security, privacy, transparency, data protection and accessibility of a high-risk AI system. If no reference to harmonised standards covering the applicable requirements has been published in the Official Journal of the European Union, the European Commission may adopt common specifications based on the requirements for high-risk AI systems. Providers may justify compliance through equivalent technical solutions only if compliance with the common specifications cannot be achieved. High-risk AI systems are also considered compliant after being tested on data relevant to the specific geographical, behavioural, contextual and functional setting.
There are two ways of conducting the conformity assessment: self-assessment (the provider's own assessment) or assessment by a qualified independent third party referred to as a 'notified body'. In other words, the Regulation allows the provider to choose between the following procedures:
- the conformity assessment procedure based on internal control;
- the conformity assessment procedure based on assessment with the involvement of a notified body (external control).
Internal control requires the provider to verify that the established quality management system complies with the requirements, to examine the information contained in the technical documentation, and to verify that the design and development process of the AI system and its post-market monitoring are consistent with the technical documentation.
External control is likewise based on the assessment of the quality management system and of the technical documentation. External control is foreseen for AI systems intended to be used for remote biometric identification, or to make inferences about personal characteristics of natural persons based on biometric or biometrics-based data, including emotion recognition systems, to the extent such AI systems are not prohibited. According to the compromise amendments, however, there should be more opportunities for third-party conformity assessment, given the complexity of high-risk AI systems. Where a conformity assessment concerns the areas described above, the provider shall submit an application to the notified body for the examination of its quality management system as well as its technical documentation. The table below summarises the data accompanying each part of the application.
Data for examination of the quality management system | Data for control of the technical documentation |
---|---|
Name and address of the provider; the list of AI systems covered under the same quality management system; the technical documentation for each covered AI system; the documentation concerning the quality management system; a description of the procedures in place to ensure that the quality management system remains adequate and effective; a written declaration that the same application has not been lodged with any other notified body. | Name and address of the provider; a written declaration that the same application has not been lodged with any other notified body; the technical documentation referred to in the requirements above. |
The application shall be assessed and examined by the notified body. Upon examination of the application, the notified body notifies the provider of its decision, which shall contain the conclusions and a reasoned assessment. The notified body issues an EU technical documentation assessment certificate, while the approved quality management system remains subject to periodic surveillance (audits and reports). The certificate shall be valid for the period it indicates, which shall not exceed four years.
If the high-risk AI system complies with the requirements, the provider shall draw up a written EU declaration of conformity and affix the CE marking, regardless of the type of conformity assessment. However, where the CE marking follows external control, the identification number of the responsible notified body shall be affixed by the body itself or, under its instructions, by the provider's authorised representative. The EU declaration of conformity reflects the provider's responsibility for compliance with the high-risk AI system requirements and shall contain the following elements (a hypothetical data model of such a declaration follows the list):
- AI system name and type and any additional unambiguous reference allowing identification and traceability of the AI system;
- Name and address of the provider or, where applicable, their authorised representative;
- A statement that the EU declaration of conformity is issued under the sole responsibility of the provider;
- A statement that the AI system in question is in conformity with this Regulation and, if applicable, with any other relevant Union legislation that provides for the issuing of an EU declaration of conformity;
- Where an AI system involves the processing of personal data, a statement that the AI system complies with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680;
- References to any relevant harmonised standards used or any other common specification in relation to which conformity is declared;
- Where applicable, the name and identification number of the notified body, a description of the conformity assessment procedure performed and identification of the certificate issued;
- Place and date of issue of the declaration, signature, name, and function of the person who signed it as well as an indication for, and on behalf of whom, that person signed.
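For providers drafting the declaration internally, the elements above map naturally onto a simple data structure. The model below is a hypothetical sketch, not an official template:

```python
# Hypothetical data model mirroring the declaration elements listed above;
# this is not an official EU template.
from dataclasses import dataclass, field

@dataclass
class EUDeclarationOfConformity:
    system_name: str
    system_type: str
    traceability_reference: str
    provider_name_and_address: str
    sole_responsibility_statement: str
    conformity_statement: str
    data_protection_statement: str | None   # only if personal data is processed
    harmonised_standards: list[str] = field(default_factory=list)
    notified_body: str | None = None         # name and identification number
    certificate_id: str | None = None
    place_and_date: str = ""
    signatory: str = ""                      # name, function, on whose behalf
```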
The CE marking is subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008 and shall consist of the initials 'CE'. In general, the CE marking shall be affixed visibly, legibly, and indelibly, and only by the provider or their authorised representative (with the exception of the notified body's identification number, as noted above), to products that fall under specific harmonisation legislation or other Union legislation that also provides for the affixing of the CE marking before the AI system is placed on the market. It should also be the only marking reflecting the provider's responsibility for conformity, although an additional marking indicating a special risk or use is allowed. A digital CE marking may be affixed if it can easily be accessed via the interface of the AI system, a code, or other electronic means.
Registration of high-risk AI systems
High-risk AI systems must be registered in the EU database before being put into service, in order to ensure transparency for the public and support the development of the artificial intelligence field. The public EU database will be set up, controlled, and maintained by the Commission. Providers are advised to familiarise themselves with the data to be entered into the EU database before the database and the AI Act itself, with the compromise amendments, enter into force. It should be noted that the provider can proceed to registration only after the conformity assessment has been performed and the necessary documentation prepared. If the developer, and subsequently the provider, decides to place the high-risk AI system on the EU market, the following information shall be provided and kept up to date in order to register (a hypothetical sketch of such a record follows the list):
- Name, address and contact details of the provider;
- Where the submission of information is carried out by another person on behalf of the provider, the name, address and contact details of that person;
- Name, address and contact details of the authorised representative, where applicable;
- AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
- A simple and comprehensible description of the intended purpose, the components and functions supported through AI, and a basic explanation of the logic of the AI system;
- Where applicable, the categories and nature of data likely or foreseen to be processed by the AI system;
- Status of the AI system (on the market/in service; no longer placed on the market/in service; recalled);
- Type, number and expiry date of the certificate issued by the notified body and the name or identification number of that notified body, when applicable;
- A scanned copy of the certificate, when applicable;
- Member States in which the AI system is or has been placed on the market, put into service or made available in the Union;
- A copy of the EU declaration of conformity;
- URL for additional information (optional).
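As a convenience for providers preparing a submission, the items above can be collected into a single record. The field names below are our own rendering, not the database's actual schema:

```python
# Hypothetical registration record; field names are our own rendering of the
# items listed above, not the EU database's actual schema.
registration_record = {
    "provider": {"name": "...", "address": "...", "contact": "..."},
    "submitted_on_behalf_by": None,           # name/address/contact, if any
    "authorised_representative": None,        # where applicable
    "trade_name": "...",
    "traceability_reference": "...",
    "intended_purpose_description": "...",
    "data_categories": [],                    # where applicable
    "status": "on the market",                # or "no longer on the market", "recalled"
    "certificate": {"type": "...", "number": "...", "expiry": "...",
                    "notified_body": "..."},  # when applicable
    "certificate_scan": "certificate.pdf",
    "member_states": ["EE"],
    "declaration_of_conformity": "declaration.pdf",
    "info_url": None,                         # optional
}
```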
Sub-Services for Responsible AI Licenses
Responsible AI Licenses (RAIL) empower developers to control and restrict the usage of their AI technology to prevent misuse and harmful applications. These licenses include specific behavioral-use clauses that grant permissions for designated use-cases while restricting others. If a license allows derivative works, RAIL Licenses mandate that any downstream derivatives (including use, modification, redistribution, and repackaging) comply with the original behavioral-use restrictions.
Ethical Considerations Policy
Ensure ethical and responsible AI usage across all scenarios with Juscutum’s expertise. We help create policies that align with ethical standards and promote responsible AI implementation, ensuring compliance with the EU AI Act.
Data Privacy Policy
Guarantee data privacy and confidentiality for all data used and generated by AI systems. Our services ensure your AI operations are in full compliance with data protection regulations, safeguarding personal information and maintaining user trust.
Bias Mitigation
Implement effective measures to identify and reduce biases in AI decision-making processes. Juscutum provides strategies and tools to ensure your AI systems are fair, impartial, and adhere to ethical guidelines, minimizing discriminatory outcomes.
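One common starting point for such measures is a simple statistical fairness check. The sketch below computes a demographic parity gap between two groups; the metric choice, group labels and threshold are illustrative assumptions, not requirements of the AI Act:

```python
# Illustrative bias check: demographic parity gap between two groups.
# The metric, data and any threshold are hypothetical examples.

def demographic_parity_gap(outcomes: list[int], groups: list[str],
                           group_a: str, group_b: str) -> float:
    """Absolute difference in positive-outcome rates (0.0 means parity)."""
    def rate(g: str) -> float:
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected) if selected else 0.0
    return abs(rate(group_a) - rate(group_b))

outcomes = [1, 0, 1, 1, 0, 1, 1, 0]
groups   = ["a", "a", "a", "b", "b", "b", "b", "a"]
gap = demographic_parity_gap(outcomes, groups, "a", "b")
print(f"parity gap: {gap:.2f}")  # 0.25; flag for review above an agreed threshold
```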
Transparency
Achieve transparency in AI operations and decision-making processes. We help establish clear, understandable, and accessible policies that ensure stakeholders can trust and verify the actions of AI systems, promoting accountability.
Self-Regulation Assessments
Adopt proactive measures to ensure responsible use of AI and Machine Learning (ML) technologies. Juscutum conducts thorough self-regulation assessments, helping organizations adhere to best practices and legal requirements under the EU AI Act, fostering responsible AI innovation.
Framework for notified bodies
This information is dedicated to legal persons or organisations that want to perform conformity assessments as a notified body under the AI Act. A notified body is a conformity assessment body that performs third-party conformity assessment activities, including testing, certification, and inspection. In order to become a notified body, an organisation must be designated by the national notifying authority of the Member State in which it is established, after applying for notification. The application shall include a description of the conformity assessment activities, the module or modules of conformity assessment and the artificial intelligence technologies for which the conformity assessment body claims to be competent, as well as an accreditation certificate. In Estonia, the certificate can be issued by the Estonian Centre for Standardisation and Accreditation.
Furthermore, there is a range of requirements that notified bodies must follow. While the notified bodies' main obligation is to verify the conformity of high-risk AI systems, they must also satisfy organisational, management, resource, process and minimum cybersecurity requirements in order to carry out their duties. These duties shall be performed independently from the provider, by employees with sufficient knowledge who have had no connection with the provider in the 12 months before providing such services and will have none in the 12 months after. Notified bodies shall have applicable procedures and internal competences (the availability of sufficient administrative, technical and scientific personnel) for their conformity assessment activities and shall participate in coordination activities.
There are two notifying authorities in Estonia:
- the Consumer Protection and Technical Regulatory Authority, which covers notification for most areas under the applicable legislation;
- the Health Board, which operates under the legislation related to medical devices.
As regards the procedural aspects of notification, the organisation should take the following steps:
- Submit an application for notification to the notifying authority;
- The notification of a notified body will be sent by the notifying authority to the Commission and the other Member States via NANDO³ (New Approach Notified and Designated Organisations);
- The notification takes effect after a notification email from NANDO has been sent to the Commission and the other Member States and published on the NANDO website.⁴
After the notification procedure is completed, the Commission shall assign a single identification number to the new notified body, regardless of the number of Union acts under which the body is notified. If the notified body no longer fulfils its obligations and requirements in relation to conformity assessment, the Commission may start an investigation based on the information provided by the national authority and take corrective measures, such as suspension or withdrawal of the notification.
Penalties
The AI Act lays down rules on penalties and administrative fines for infringements related to non-compliance with its requirements, obligations, and prohibitions. It must be noted that the compromise lays down new amounts compared to the original Act. Once the compromise is approved, the following cases of non-compliance and estimated penalties will apply according to the latest compromise text (a small calculation sketch follows the list):
- Non-compliance with the prohibitions – up to 40 million euros or 7% of turnover;
- Non-compliance with the requirements for data and data governance – up to 20 million euros or 4% of turnover;
- Infringement to obligations or other requirements – up to 10 million euros or 2% of turnover;
- Supplying incorrect, incomplete, or misleading information – up to 5 million euros or 1% of turnover.
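Assuming the 'whichever is higher' reading used elsewhere in EU law (this should be verified against the final text), the effective ceiling for a given infringement can be computed as follows:

```python
# Sketch of the fine ceiling, assuming the "whichever is higher" reading;
# verify against the final text of the AI Act before relying on this.

def max_fine(fixed_ceiling_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for a given infringement."""
    return max(fixed_ceiling_eur, turnover_pct * annual_turnover_eur)

# Hypothetical: a prohibited-practice infringement by a firm with
# 1 billion euros of annual turnover.
print(max_fine(40_000_000, 0.07, 1_000_000_000))  # 70000000.0
```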
Penalties shall be decided according to the general principles of effectiveness, dissuasiveness, and proportionality to the offence committed. The nature, duration and character of the offence, as well as the circumstances of the individual case, can affect the size of the penalty.
¹ The standardised template will be developed by the Commission in the near future.
² References of harmonised standards and of other European standards published in the OJEU are available on the official website of the European Union: https://single-market-economy.ec.europa.eu/single-market/european-standards/harmonised-standards_en
³ The electronic list of notified bodies: https://ec.europa.eu/growth/tools-databases/nando/
⁴ Blue Guide – guidance on the application of all aspects of the single market for products, including the role and organisation of notified bodies.