August 12, 2025 - The European Commission has opened a formal investigation into OpenAI's GPT-5 over its deepfake generation capabilities, the first major regulatory action against a foundation model under the EU AI Act. The move tests the enforcement mechanisms of the world's most comprehensive AI legislation just days after GPT-5's global launch. The probe focuses on the model's multimodal video synthesis features, and the Commission has demanded full technical documentation within 30 days to assess compliance with the Act's deepfake watermarking requirements.
Technical analysis by the Commission's AI Office indicates that GPT-5's enhanced video generation capabilities may bypass existing detection systems, including the recently developed universal deepfake detector reported to achieve 98% accuracy. The investigation examines whether OpenAI implemented adequate safeguards against non-consensual synthetic media creation, particularly given the model's improved temporal coherence in video generation. Thierry Breton, European Commissioner for the Internal Market, stated: 'We cannot permit generative AI to become an unregulated vector for disinformation when democratic processes hang in the balance.' This assessment follows the Commission's official notice, published yesterday, detailing specific technical concerns about GPT-5's architecture.
This probe arrives amid intensifying global scrutiny of generative AI's societal impacts, highlighting the growing tension between rapid model deployment and regulatory frameworks. It underscores how the EU AI Act's high-risk classification for deepfake technologies is moving from theory to enforcement, potentially setting precedents for similar legislation in the US and Asia. The investigation also reflects broader industry shifts toward mandatory AI auditing protocols and the integration of detection capabilities directly into foundation models, a trend accelerated by recent breakthroughs in deepfake identification. Crucially, it tests whether voluntary industry commitments can withstand formal regulatory pressure in the generative AI arms race.
Our view: While regulatory vigilance is essential, this investigation must weigh legitimate security concerns against the risk of overreach that stifles beneficial innovation. The Commission should prioritise collaborative technical solutions with developers over purely punitive measures, focusing on standardised watermarking frameworks that work across platforms. This case will determine whether AI governance can evolve at the same pace as the technology it seeks to regulate.