EU Parliament Unveils Landmark AI Act, Setting Global Regulatory Standard

The European Parliament has officially approved the landmark Artificial Intelligence Act, establishing the world's first comprehensive legal framework for AI and positioning the EU as a global leader in fostering trustworthy, human-centric artificial intelligence. This groundbreaking legislation employs a risk-based approach, banning harmful practices like social scoring and real-time biometric identification while imposing stringent requirements on high-risk applications, a move expected to significantly influence AI regulation worldwide.

The European Parliament has given its final approval to the landmark Artificial Intelligence Act, establishing the world's first comprehensive legal framework for AI, Reuters reported. The decision positions the European Union as a global leader in AI governance, aiming to foster trustworthy, human-centric artificial intelligence across its member states.

The groundbreaking legislation employs a risk-based approach, meticulously categorizing AI systems according to their potential to cause harm. It imposes stringent requirements on high-risk applications while outright banning certain uses deemed unacceptable, as detailed by the European Union's official information platform.

This decisive move is widely expected to exert significant influence on AI regulation worldwide, potentially creating a "Brussels Effect" where EU standards become de facto global norms, according to analysis from the Brookings Institution.

Among the explicitly prohibited AI practices are harmful AI-based manipulation, social scoring systems, and real-time remote biometric identification in public spaces, as outlined by the European Union and Intellias. These bans underscore the EU's commitment to protecting fundamental rights and public safety.

Conversely, high-risk AI applications, such as those used in critical infrastructure, healthcare, and law enforcement, will face rigorous obligations. These include mandatory risk assessments, high-quality datasets, and robust human oversight, the European Parliament noted.

The Act's implementation follows a phased approach, with provisions becoming applicable between February 2025 and August 2027, according to eyreact and Ardion. This staggered timeline gives businesses and authorities time to adapt to the new rules.

Enforcement of the Act will be overseen by the newly established European AI Office, alongside national market surveillance authorities, as detailed by the European Union and Orrick. These bodies will ensure compliance and have the power to impose substantial fines for violations.

  • The legislative journey of the EU AI Act has been extensive, commencing with the European Commission's proposal in April 2021. Following protracted negotiations, a provisional agreement was reached in December 2023, leading to the European Parliament's vote in March 2024 and formal adoption by the Council in May 2024, eyreact reported. The Act was officially published in the EU's Official Journal on July 12, 2024, and entered into force on August 1, 2024, marking the start of its phased applicability.

  • The Act categorizes AI systems into four distinct risk levels: unacceptable, high, limited, and minimal, with an additional framework for general-purpose AI models. Unacceptable risk systems are banned outright, while high-risk systems face stringent requirements. Limited-risk applications primarily require transparency, and minimal-risk systems are largely unregulated, as explained by Intellias and the European Union. This tiered approach aims to balance innovation with safety.

  • Specific prohibited AI practices under Article 5 of the Act include AI systems that deploy subliminal techniques to distort behavior, exploit the vulnerabilities of specific groups (such as children or persons with disabilities), and engage in social scoring. Also banned are untargeted scraping of facial images to build databases, emotion recognition in workplaces and educational institutions, and certain uses of real-time remote biometric identification by law enforcement, according to the European Union and Trail.

  • Providers of high-risk AI systems must adhere to strict obligations before market placement. These include conducting adequate risk assessments, ensuring high-quality datasets to minimize discriminatory outcomes, maintaining logging capabilities for traceability, and providing detailed documentation. Furthermore, appropriate human oversight measures, high levels of robustness, cybersecurity, and accuracy are mandated, as specified by the European Union.

  • General-Purpose AI (GPAI) models, including large language models, are subject to specific regulations, with more stringent obligations for high-capability models posing systemic risks. Providers must comply with transparency requirements, technical documentation, and safety protocols. The European Commission has also encouraged voluntary Codes of Practice as an interim solution, with major industry players like OpenAI already adopting them, ClarkeModet reported.

  • Reactions from stakeholders have been mixed. While some tech companies like IBM and Salesforce have welcomed the Act, viewing it as a step towards fostering trust and responsible AI, civil society organizations have expressed concerns. Groups like Amnesty International and the European Disability Forum argue that the Act does not go far enough in protecting fundamental rights, particularly in areas like policing and migration, according to the IAPP and CoCreations AI.

  • The Act's extraterritorial reach means it will impact non-EU companies offering AI products or services within the EU, compelling them to comply with its standards. This "Brussels Effect" could lead to European AI governance principles becoming a global benchmark, although some analyses, such as that from the Brookings Institution, suggest its influence might be more targeted than the GDPR's. Informatica noted that companies worldwide must now reckon with compliance.

  • The governance structure for the AI Act involves a combination of centralized and decentralized bodies. The European AI Office, established within the European Commission, holds exclusive jurisdiction over general-purpose AI models. National market surveillance authorities will handle most compliance investigations, while the European Artificial Intelligence Board will promote cooperation among member states, as detailed by the European Union and Orrick. Non-compliance can result in significant administrative fines.

Editorial Process: This article was drafted using AI-assisted research and reviewed by human editors for accuracy, tone, and neutrality.

Reviewed by: Bridgette Jacobs
