| European Union regulation | |
|---|---|
| **Preparative texts** | |
| Commission proposal | 2021/206 |
The Artificial Intelligence Act (AI Act) is a proposed European Union regulation on artificial intelligence. Proposed by the European Commission on 21 April 2021[1] and not yet enacted,[2] it would introduce a common regulatory and legal framework for artificial intelligence.[3]
Its scope would encompass all types of artificial intelligence in a broad range of sectors (exceptions include AI systems used solely for military, national security, research, and non-professional purposes[4]). As a piece of product regulation, it would not confer rights on individuals, but would regulate the providers of artificial intelligence systems and entities making use of them in a professional capacity.[5]
The AI Act was revised following the rise in popularity of generative AI systems such as ChatGPT, whose general-purpose capabilities present different stakes and did not fit the framework as originally defined.[6] More restrictive regulations are planned for powerful generative AI systems with systemic impact.[7]
The proposed EU Artificial Intelligence Act aims to classify and regulate artificial intelligence applications based on their risk of causing harm. This classification includes four categories of risk ("unacceptable", "high", "limited" and "minimal"), plus one additional category for general-purpose AI. Applications deemed to represent unacceptable risks are banned. High-risk ones must comply with security, transparency and quality obligations and undergo conformity assessments. Limited-risk AI applications only have transparency obligations, and those representing minimal risks are not regulated. For general-purpose AI, transparency requirements are imposed, with additional and thorough evaluations when representing particularly high risks.[7][8]
The Act further proposes the introduction of a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation.[9]
The AI Act is expected to have a large impact on the economy. Like the European Union's General Data Protection Regulation, it can apply extraterritorially to providers from outside the EU, if they have products within the EU.[5]
The Act defines different risk categories depending on the type of application, plus one specifically dedicated to general-purpose generative AI.
The Act regulates entry to the EU internal market. To this end it uses the New Legislative Framework, which can be traced back to the New Approach of 1985. This works as follows: the EU legislator creates the AI Act, which contains the most important provisions that all AI systems seeking access to the EU internal market must comply with. These provisions are called 'essential requirements'. Under the New Legislative Framework, the essential requirements are passed on to European Standardisation Organisations, which draw up technical standards that further specify them.[14]
As mentioned above, the Act requires that member states designate their own notified bodies. Conformity assessments must take place to check whether AI systems indeed conform to the standards set out in the AI Act.[15] This conformity assessment is carried out either by self-assessment, in which the provider of the AI system checks for conformity itself, or by third-party conformity assessment, in which a notified body carries out the assessment.[16] Notified bodies retain the possibility of carrying out audits to check whether conformity assessment has been performed properly.[17]
Under the current proposal, many high-risk AI systems would not require third-party conformity assessment, which some have criticised.[17][18][19][20] These critics argue that high-risk AI systems should be assessed by an independent third party to fully ensure their safety.
In February 2020, the European Commission published the "White Paper on Artificial Intelligence – A European approach to excellence and trust".[21] In October 2020, debates between EU leaders took place. On 21 April 2021, the AI Act was officially proposed. On 6 December 2022, the European Council adopted its general orientation, allowing negotiations to begin with the European Parliament. On 9 December 2023, after three days of "marathon" talks, the Council and Parliament concluded an agreement.[22]
The AI Act is unlikely to take effect before 2025.[2] Its applicability will be progressive: AI applications deemed to present "unacceptable" risks should be banned 6 months after entry into force, provisions for general-purpose AI should become applicable 12 months after entry into force, and the AI Act should be fully applicable 24 months after entry into force.[23]