EU Delays “High-Risk” AI Rules Until 2027 Following Tech Industry Pushback

The European Commission has proposed delaying some “high-risk” provisions of the EU AI Act until December 2027, pushing back the original August 2026 timeline. The move, announced on Wednesday, November 19, 2025, is part of a broader regulatory simplification package aimed at reducing bureaucratic hurdles, responding to criticism from Big Tech companies, and enhancing Europe’s competitiveness in the global technology market.

This latest adjustment follows the EU’s previous watering down of environmental regulations after opposition from business groups and the U.S. government. European authorities insist that while some compliance deadlines are being postponed, the overall rules remain robust.


Scope of the “High-Risk” AI Delay

The delay affects AI applications in areas considered high-risk, including:

  • Biometric identification
  • Road traffic and autonomous vehicle systems
  • Utilities and essential service management
  • Job applications and professional exams
  • Healthcare services
  • Credit scoring and financial assessments
  • Law enforcement operations

In addition, the EU plans to simplify consent requirements for website cookies, part of a broader “Digital Omnibus” reform package that also addresses the General Data Protection Regulation (GDPR), the e-Privacy Directive, and the Data Act.

A European Commission official stressed:

“Simplification is not deregulation. Simplification means that we are taking a critical look at our regulatory landscape.”


Implications for Big Tech and AI Development

The proposed changes could provide companies like Alphabet (Google), Meta, OpenAI, and other AI developers with greater flexibility to use Europeans’ personal data for training AI models. By easing certain compliance requirements and delaying enforcement timelines, the EU aims to balance innovation with safety, while responding to concerns from the tech sector about operational complexity.

The Digital Omnibus proposal must still be debated and approved by EU member states and the European Parliament before taking effect. While the delay eases immediate regulatory pressure, companies developing AI in Europe must still prepare for full implementation by December 2027, particularly for applications in high-risk areas.


Conclusion

Europe’s decision to delay “high-risk” AI regulations underscores the challenge of balancing innovation with regulatory oversight. By providing additional time for compliance, the EU seeks to maintain its global AI competitiveness while keeping consumer protections intact. Tech firms now have a longer runway to align AI systems with European rules, though scrutiny will likely increase as the 2027 deadline approaches.
