AI Regulation by the EU Now in Full Force
Meta's Chief Global Affairs Officer, Joel Kaplan, has announced that the tech giant will not sign the European Union's (EU) Code of Practice for general-purpose AI models. The decision comes as the EU continues to tighten its rules on AI following the entry into force of the EU AI Act last year.
The EU AI Act sorts AI systems into distinct risk classifications, including a category for general-purpose models that pose systemic risk. Providers of such models are obliged to conduct comprehensive evaluations to identify potential systemic risks, and general-purpose AI models, particularly those classified as posing systemic risk, must comply with the Act's provisions by August 2.
The Act imposes strict obligations on companies developing models associated with systemic risks. These companies must proactively identify, assess, and mitigate such risks, maintain robust risk-management and transparency measures, and submit to regulatory oversight aimed at containing potential systemic impacts.
Providers are also required to document the adversarial testing performed while mitigating systemic risks, and must implement appropriate cybersecurity measures to safeguard against misuse or compromise of their AI systems. Non-compliance with the AI Act can result in substantial financial penalties, from €7.5 million (or 1 percent of worldwide annual turnover) for supplying incorrect information up to €35 million (or 7 percent of turnover) for the most serious violations.
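To make the penalty structure concrete, here is a minimal sketch of how the Act's fine ceilings work: for each violation tier, the applicable maximum is the higher of a fixed amount or a share of worldwide annual turnover. The tier names and the example turnover figure are illustrative assumptions for this sketch, not language from the Act itself.

```python
# Illustrative calculation of the AI Act's penalty ceilings.
# The maximum fine is the HIGHER of a fixed amount or a percentage of
# worldwide annual turnover, depending on the violation tier.
# (For SMEs the Act applies whichever is LOWER; that case is ignored here.)

PENALTY_TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),  # up to EUR 35M or 7%
    "other_obligation":      (15_000_000, 0.03),  # up to EUR 15M or 3%
    "incorrect_information": (7_500_000,  0.01),  # up to EUR 7.5M or 1%
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given violation tier."""
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, pct * annual_turnover_eur)

# Hypothetical example: a company with EUR 2 billion in annual turnover
# committing a prohibited practice faces up to EUR 140 million.
print(f"EUR {max_fine('prohibited_practice', 2e9):,.0f}")
```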
The AI Act Explorer, a guide for companies navigating these regulations, was launched on July 18, roughly two weeks ahead of the August 2 deadline. The tool helps companies ascertain their precise obligations under the Act.
The European Union defines AI models presenting systemic risks as those developed using greater than 10²⁵ floating-point operations (FLOPs). Notable models currently falling under this classification include OpenAI's GPT-4 and o3, Google's Gemini 2.5 Pro, Anthropic's more recent Claude models, and xAI's Grok-3.
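For a sense of scale, here is a rough back-of-the-envelope check against that threshold using the widely cited 6 × N × D approximation for dense transformer training compute (roughly 6 FLOPs per parameter per training token). The parameter and token counts below are illustrative assumptions, not disclosed figures for any specific model.

```python
# Rough estimate of training compute using the common 6 * N * D rule of
# thumb: total FLOPs ~ 6 x parameter count x training tokens.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the EU AI Act's presumption

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 15 trillion tokens.
flops = training_flops(params=1e12, tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # ~9e25
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD)  # True
```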
While Meta has declined to endorse the EU's Code of Practice, other AI companies, including Mistral AI and OpenAI, have committed to signing it. The Code is a voluntary framework designed to help providers demonstrate compliance with the Act's binding rules. Separately, applications such as social scoring and certain facial recognition uses, including real-time remote biometric identification in public spaces, fall under the Act's unacceptable-risk category and are prohibited within the EU.
OpenAI recently launched the ChatGPT agent, which can use a virtual computer to execute multi-step tasks, an example of the increasingly capable systems the Act is meant to govern. Providers are obliged to report serious incidents to the EU AI Office and to national authorities. The AI Act sets out guidelines for balancing AI innovation with safety, marking a significant step forward in the regulation of AI technology in the EU.