Briefing Note: European Artificial Intelligence Act – Key changes to initial text

Purpose: For information.

Issue: On December 8, 2023, the Council of the European Union, the European Commission and the European Parliament reached agreement on amendments to the EU AI Act. After final amendments and negotiations, the Act received approval from the EU's Committee of Permanent Representatives (Coreper) on February 2, 2024. This note summarizes the main changes between the original text and the newly agreed version.

Overview:

This note is based on the text of the AI Act released by the Council of the European Union; the same text had previously been leaked by reporters. We have also reviewed updates on key features of the agreement from the Council, the European Commission and the European Parliament. Small amendments may still occur between this writing and final approval of the text, but this is not expected.

Key Negotiated Amendments

Key amendments to the AI Act include:

  • Definition of AI system: The definition of AI system will be simplified and brought into line with the OECD definition (note: this is similar to the approach taken in the recent amendments to AIDA). The EU Commission has also been tasked with developing guidelines about the application of this definition.
  • National Security and Law Enforcement Exemptions: The AI Act will not apply to AI systems used for military, defence or national security purposes. Multiple carve-outs will also apply to law enforcement use of AI systems: for example, law enforcement use of high-risk systems will not be recorded in the public database of such uses, and law enforcement will be permitted, in exceptional circumstances, to deploy high-risk AI systems that have not passed a conformity assessment.
  • Other exemptions: The AI Act will also not apply to AI systems used for the sole purpose of research and innovation, nor to non-professional use of AI tools.
  • General purpose AI systems / foundation models: General purpose AI systems will be subject to transparency requirements, including the creation of technical documentation and detailed summaries of the content used for training. High-impact general purpose AI systems or foundation models that meet certain conditions (broadly related to computing power, functionality, and number of users) will be subject to additional obligations, including conducting model evaluations, assessing and mitigating systemic risks, conducting adversarial testing, reporting serious incidents to the EU Commission, and ensuring cybersecurity. Until harmonized EU standards are published, organizations can rely on codes of practice to meet these obligations; these codes can be reviewed and approved by the AI Office (discussed later).
  • Prohibited AI practices: The AI practices prohibited by the AI Act are summarized as follows:
    • Deployment of subliminal, purposely manipulative or deceptive techniques that would materially distort a person or group’s behaviour by causing them to take an action they would not have otherwise taken, or that is likely to cause harm;
    • Exploitation of the vulnerability of a person or group based on age, disability, or social or economic situation;
    • Biometric categorisation of persons to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation (except for the labelling of lawfully obtained data, and biometric categorisation by law enforcement);
    • Social scoring based on social behaviour or personal characteristics;
    • Individual predictive policing (i.e., using software to assess an individual’s risk of committing future crimes based on personal traits);
    • Untargeted scraping of the internet or CCTV footage for facial images to build up or expand databases;
    • Emotion recognition in the workplace and educational institutions, unless for medical or safety reasons (e.g., monitoring the tiredness levels of a pilot).
  • Remote Biometric Identification: The use of “real-time” biometric identification systems for the purpose of law enforcement is also a prohibited practice unless for the following purposes:
    • Targeted searches for victims (abduction, trafficking, sexual exploitation);
    • Prevention of a specific, substantial, and imminent threat to the life or physical safety of a person, or of a genuine and present or genuine and foreseeable threat of terrorist attack; and,
    • Localization and identification of a person suspected of committing a crime set out in regulation and punishable by at least four years in detention (murder, kidnapping, rape, etc.).
    Such uses would be subject to prior authorization by a judicial authority, and preceded by a fundamental rights impact assessment. They would also be subject to transparency requirements, such as reporting to the EU Commission. Post-remote (i.e. not real-time) biometric identification would also generally require judicial authorization.
  • High-risk AI systems: High-risk AI systems remain listed in an Annex to the AI Act. Within that Annex, AI systems used to influence the outcome of elections and voter behaviour have been added as ‘high-risk’, while social media recommendation systems were removed from the high-risk list. It is also now clarified that AI systems within the listed domains are not high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making.
  • Fundamental rights impact assessment: AI systems in the public sector, and those used to assess creditworthiness or life/health insurance pricing, will be subject to a fundamental rights impact assessment on first use. Criteria for this assessment are set out in the Act, and the AI Office (discussed later) will be tasked with developing an automated tool to simplify this process.
  • SME Support: The AI Act will promote the creation of regulatory sandboxes to support the development of AI systems by small- and medium-sized enterprises (SMEs), and maximum penalties may be reduced where the offender is an SME.
  • Oversight: Governance of the AI Act will include an AI Office within the EU Commission, advised by a panel of independent experts, to oversee general purpose AI models, foster standards and testing practices, and enforce common rules across member states; and an AI Board, composed of representatives of member states and advised by an advisory forum of stakeholders across multiple roles, to act as a coordination platform. A complaint mechanism for individuals is also envisioned.
  • Penalties: Maximum penalties have risen to €35M or 7% of worldwide annual turnover, whichever is higher (from €30M / 6%), for the highest category of non-compliance, and have been lowered to €7.5M or 1.5% of annual turnover (from €10M / 2%) for the lowest category (failing to provide accurate information); an illustrative calculation follows this list.
  • Coming into force: Most provisions will be applicable 2 years after the Act comes into force (20 days after publication in the EU Official Journal), except the rules on prohibited AI practices (applicable after 6 months) and on general-purpose AI systems (after 12 months). AI systems already deployed must be made compliant between 2 and 4 years after coming into force, depending on the nature of the system; a timeline sketch also follows this list.
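
To make the penalty ceilings concrete, the following is a minimal illustrative sketch in Python. The €2B turnover figure is hypothetical, and the sketch assumes our reading that, for large undertakings, the higher of the fixed cap or the turnover percentage applies; it simplifies away the Act's further nuances (e.g., reduced caps for SMEs).

    def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
        # Maximum administrative fine: the higher of a fixed cap or a
        # percentage of worldwide annual turnover (assumed reading of the text).
        return max(fixed_cap_eur, turnover_eur * pct)

    turnover = 2_000_000_000  # hypothetical firm with €2B worldwide annual turnover

    # Highest category of non-compliance: €35M or 7%, whichever is higher.
    print(max_fine_eur(turnover, 35_000_000, 0.07))   # 140000000.0 -> €140M

    # Lowest category (failing to provide accurate information): €7.5M or 1.5%.
    print(max_fine_eur(turnover, 7_500_000, 0.015))   # 30000000.0 -> €30M

Similarly, a minimal sketch of the application timeline, assuming a purely hypothetical Official Journal publication date; actual dates will depend on when the final text is published.

    from datetime import date, timedelta

    def add_months(d: date, months: int) -> date:
        # Naive month arithmetic; assumes the day exists in the target month.
        y, m = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + y, month=m + 1)

    publication = date(2024, 6, 1)               # hypothetical OJ publication date
    in_force = publication + timedelta(days=20)  # entry into force: 2024-06-21

    print("Prohibited practices apply:", add_months(in_force, 6))        # 2024-12-21
    print("General-purpose AI rules apply:", add_months(in_force, 12))   # 2025-06-21
    print("Most other provisions apply:", add_months(in_force, 24))      # 2026-06-21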

Approval:

Prepared by
Vance Lockton, Senior Technology Policy Advisor
Date December 13, 2023
Revisions January 30, 2024

Approved by Director and/or Executive Director
Lara Ives, Executive Director, Policy, Research and Parliamentary Affairs Directorate (PRPA)
Date December 15, 2023 / January 30, 2024

Approved by Deputy Commissioner
Gregory Smolynec, Deputy Commissioner, Policy and Promotion
Date

Approved by Privacy Commissioner
Philippe Dufresne, Privacy Commissioner
Date

DISTRIBUTION: Commissioner; Deputy Commissioner, Compliance; Deputy Commissioner, Policy & Promotion; General Counsel; PRPA.
