EU Agrees to Simplify AI Regulation
EU co-legislators reached a provisional agreement on May 7 to reform the AI Act as part of the Digital Omnibus simplification package, pairing delayed compliance deadlines for high-risk AI systems with a ban on AI systems that generate non-consensual sexually explicit content and child sexual abuse material.
Objective Facts
On May 7, 2026, after nine hours of negotiations, EU co-legislators reached a provisional agreement to reform the AI Act as part of the Digital Omnibus on AI simplification package. A looming compliance deadline for high-risk AI systems placed significant pressure on negotiators to reach an agreement in time. The Commission had proposed delaying the application of rules on high-risk AI systems by up to 16 months, so that the rules start to apply only once the Commission confirms that the needed standards and tools are available. AI systems involving biometrics, critical infrastructure, education, employment, law enforcement, and border management now face a December 2, 2027 deadline, while AI systems embedded in products have an August 2, 2028 deadline. The agreement also includes a ban on 'nudifier' apps, a provision intended to protect fundamental rights and human dignity. Left-leaning civil rights organizations view the deal as a rollback of protections, while industry groups and centrist politicians frame it as necessary simplification to enable European AI innovation.
Left-Leaning Perspective
A coalition of privacy and civil rights organizations, including ARTICLE 19 and over 40 partner groups, argues the Omnibus is a rollback of hard-won protections dressed up as simplification, and that weakening the AI Act before its core provisions have even come into force risks dismantling one of Europe's most distinctive regulatory assets. ARTICLE 19 argues the AI Omnibus 'effectively weakens the AI Act and leaves people in the EU without adequate and timely protection from high-risk AI systems, such as biometric identification or AI use in schools'. BEUC, the European Consumer Organisation, regrets that the final agreement 'creates a less safe digital environment for consumers as it delays key provisions in the AI Act and creates dangerous loopholes in the scope of the law,' while rolling back 'key consumer protections for uncontrolled processing of previously protected personal data while disproportionately expanding regulatory privileges to larger companies'. BEUC is also concerned that the deal allows the EU to limit core AI Act obligations and exempt certain systems through delegated acts, which could open the door to further deregulation in the future. Civil society groups warn that proposed reforms to redefine personal data 'will weaken protections under the law and potentially allow Big Tech to harvest more personal data for AI training,' and that 'special carveouts for AI could undermine the core purposes of the GDPR'. In this view, the Commission's Digital Omnibus, backed by powerful corporations, threatens to weaken EU digital rules once seen as global benchmarks for privacy and AI, playing on a false dichotomy between regulation and innovation championed by Big Tech, which seeks a rules-free environment that prioritizes profit. Progressive voices note that while the nudifier ban was a necessary win, it obscures the broader deregulatory trajectory of the deal.
Right-Leaning Perspective
German Chancellor Friedrich Merz aggressively pushed for industrial AI exemptions, telling German CEOs 'I will push to ease the regulatory burden in the EU on AI and, where possible, to exempt industrial AI from the current regulatory straightjacket that is too tight for AI within the European Union'. The pressure from industry and from Chancellor Merz's CDU nearly caused the entire negotiation strategy to collapse. In a significant win for the German government, industrial AI receives a major carve-out from the AI Act's requirements: AI for machinery products need only abide by separate, pre-existing sectoral safety rules. Arba Kokalari, the European Parliament's rapporteur for the Internal Market committee, stated 'We are not weakening any safety rules; we are clarifying the rules for companies in Europe', emphasizing that 'companies should not be regulated twice for one thing' and that 'if Europe wants to be competitive, we must increase investment and make it easier to use AI, not punish companies who introduce innovative AI features in safe products'. Europe's own tech startups and companies are likely to benefit from regulatory easing: Germany, France, and the Netherlands are among the world's biggest investors in their AI sectors, and industry actors in these countries have been vocal advocates for simplification measures. The European Commission estimates that the Digital Omnibus measures could save European businesses up to EUR 5 billion in administrative costs by 2029. Center-right voices frame the agreement as pragmatic: the bloc is attempting a delicate balancing act, protecting its citizens from the most egregious harms of AI while ensuring its industries aren't strangled by the very rules meant to guide them.
However, Big Tech's prominent Brussels lobbyists, the Computer and Communications Industry Association Europe, said the deal 'misses a clear opportunity to deliver genuine simplification in key areas', suggesting even the right has mixed views.
Deep Dive
The EU AI regulation simplification agreement represents a classic regulatory compromise struck under acute deadline pressure. The central goal was to postpone the application of high-risk AI requirements, which were due to take effect before the relevant standards had even been finalized, a situation widely deemed unworkable for industry. The Parliament is strongly inclined to prioritize industry's concerns over regulatory burden, and Germany is intent on maximizing the impact of the Omnibus wave in support of its own struggling companies. This structural imbalance gave the right, particularly German interests, significant leverage in the negotiations.

What each side gets right and overlooks: The right correctly identifies that the supporting framework (harmonised standards, designated notified bodies, guidance documents, conformity assessment procedures) will not be ready by the original August 2, 2026 deadline, making postponement practically necessary. The left correctly notes that the deal allows the EU to limit core AI Act obligations through future delegated acts, which could enable further deregulation, a legitimate concern about institutional drift. However, the left underestimates that the risk-based architecture of the AI Act remains intact and that the underlying obligations on high-risk systems are not being softened. The right, for its part, overlooks that weakening the Act before its core provisions have even come into force risks dismantling one of Europe's most distinctive regulatory assets.

What to watch: The AI Omnibus is only a precursor to the more consequential Data Omnibus, whose GDPR simplification will carry far-reaching implications for fundamental rights, negotiated by the same industry-friendly Parliament and under the same German pressure. The machinery carve-out also sets a precedent for sectoral exemptions that could expand.
With the political argument for delay now exhausted, a second postponement would squander the Brussels-effect leverage that makes the AI Act globally relevant; enforcement rigor after December 2027 will therefore be critical to validating the compromise.