EU's Landmark AI Act Begins Enforcement Phase

On July 11, 2025, the European Union officially began enforcing key provisions of its comprehensive AI Act, marking a significant milestone in global AI governance. The regulations establish clear guidelines for AI development and deployment, with particular focus on general-purpose AI models and high-risk applications. This regulatory framework represents the world's first comprehensive legal approach to artificial intelligence as the technology becomes increasingly integrated across virtually every sector of the economy.
The European Union has reached a pivotal moment in its regulation of artificial intelligence with the enforcement of key provisions of its AI Act beginning July 11, 2025. This marks the implementation of the world's first comprehensive regulatory framework for AI technologies.

The AI Act is the first-ever comprehensive legal framework on AI. It addresses the risks the technology poses and positions Europe to play a leading role globally, setting out clear, risk-based rules for AI developers and deployers regarding specific uses of AI. The Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package, the launch of AI Factories and the Coordinated Plan on AI. Together, these measures are intended to guarantee safety, fundamental rights and human-centric AI, while strengthening uptake, investment and innovation in AI across the EU.

The implementation follows a phased approach that began with the Act's entry into force on August 1, 2024. The Act's first substantive obligations began to apply in early 2025, and the current milestone, which imposes sweeping obligations on General Purpose AI ("GPAI") models and introduces new governance structures, takes effect on August 2, 2025. For AI developers, providers, and deployers, especially those operating across borders, this milestone marks a pivotal shift from preparation to implementation.

This phase activates the European AI Office and the European Artificial Intelligence Board ("EAIB"), which will oversee enforcement and coordination across member states. National authorities must also be designated by this date. GPAI model providers—particularly those offering large language models ("LLMs")—will face new horizontal obligations, including transparency, documentation, and copyright compliance. For GPAI models deemed to pose systemic risk, additional requirements such as risk mitigation, incident reporting, and cybersecurity safeguards will apply.

Despite industry pushback, the European Commission has maintained its implementation timeline. On July 3, 2025, Reuters reported that companies were calling for a pause on the provisions and gaining support from some politicians. "To address the uncertainty this situation is creating, we urge the Commission to propose a two-year 'clock-stop' on the AI Act before key obligations enter into force," read an open letter sent to the European Commission by a group of 45 leading European companies. The Commission, however, rejected the request and stated that it would continue with implementation as scheduled.

While the rules for general purpose AI models take effect on August 2, 2025, the powers for enforcing those rules start a year later, on August 2, 2026. As of then, non-compliance attracts administrative fines of up to €15 million or 3% of global turnover (rising to €35 million or 7% for prohibited practices). The EU's regulatory approach aims to balance innovation with protection of fundamental rights, establishing a framework that may influence AI governance globally.
