European Parliament Approves Landmark AI Act: Balancing Innovation and Fundamental Rights

Earlier this week, the European Parliament passed the first-ever Artificial Intelligence (AI) Act, with 85 percent of lawmakers voting in favor. The Act sets out to address the risks associated with AI while ensuring that AI systems continue to respect privacy, human dignity, and fundamental rights. The legislation fits into a broader EU strategy to promote trustworthy AI development, alongside the AI Innovation package and the Coordinated Plan on AI.

The Act prohibits certain unacceptable uses of AI, particularly those that could infringe upon fundamental rights and privacy, such as scraping facial images from the internet or security camera footage to build facial recognition databases. It also classifies AI systems by risk level according to their intended use and function. High-risk systems will be subject to rigorous obligations, including adequate risk assessment and proper human oversight; examples include resume-sorting software for hiring and systems used in migration and asylum management or the administration of justice.

There are also guidelines for limited-risk systems, such as chatbots like ChatGPT. For these, the Act requires transparency about AI-generated content so that users are fully informed. This extends to clearly labeling artificial or manipulated images, audio, and video, often referred to as "deepfakes." The issue has been brought to the fore during recent elections, amid growing concerns that deceptive content could be used to manipulate voters.

The European AI Office, established in February 2024, will oversee the enforcement and implementation of the AI Act in cooperation with member states and the providers and deployers of AI systems. Penalties for non-compliance can reach €35 million or 7 percent of a company's global annual turnover, whichever is higher.

Subject to endorsement by the European Council, the Act is expected to become EU law by May or June. It will apply in full two years after adoption, to both public and private entities inside and outside the EU, provided the AI system is placed on the EU market or affects people in the EU. The Act could significantly shape global AI regulation, much as the EU's General Data Protection Regulation (GDPR) did for data protection. However, DigitalEurope, a digital trade association, has voiced concerns about the risk of overregulation and the high cost of compliance for companies.