The European Union recently released a draft of its behavior guidelines for general-purpose artificial intelligence (GPAI) models. The document, due to be finalized in May, outlines risk management requirements and gives companies a roadmap for complying with the AI Act and avoiding hefty fines.

GPAI refers to general-purpose artificial intelligence models trained with a total computational budget exceeding 10²⁵ FLOP, the threshold at which the AI Act presumes systemic risk. Companies expected to fall under the EU guidelines include OpenAI, Google, Meta, Anthropic, and Mistral.
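To get a feel for where the 10²⁵ FLOP line sits, a common rule of thumb (not part of the EU text) estimates the training compute of a dense transformer as roughly 6 × parameters × training tokens. The sketch below uses that heuristic; the model sizes are illustrative, not taken from any regulatory filing.

```python
# Rough heuristic: training FLOP ~= 6 * N (parameters) * D (training tokens).
# This approximation is a community rule of thumb, not an EU-defined formula.
EU_GPAI_THRESHOLD_FLOP = 1e25

def estimated_training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the compute estimate exceeds the AI Act's 10^25 FLOP threshold."""
    return estimated_training_flop(params, tokens) > EU_GPAI_THRESHOLD_FLOP

# Illustrative example: a 70B-parameter model trained on 15T tokens lands
# around 6.3e24 FLOP, just under the threshold; doubling the token count
# would push it over.
```

By this estimate, only the largest current training runs cross the line, which matches the short list of companies expected to be covered.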


The document addresses several core areas for GPAI developers: transparency, copyright compliance, risk assessment, and technical and governance risk mitigation. It emphasizes transparency in AI development and requires AI companies to disclose information about the web crawlers used to train their models.

The risk assessment section aims to prevent cybercrime, widespread discrimination, and loss of control over AI. AI makers are expected to adopt a Safety and Security Framework (SSF) that breaks down their risk management policies and scales mitigations in proportion to systemic risk.

Companies that violate the Artificial Intelligence Act face severe penalties: fines of up to €35 million (approximately $36.8 million) or up to 7% of their worldwide annual turnover, whichever is higher.
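The "whichever is higher" rule means the effective cap is a simple maximum of the two figures. A minimal sketch, with made-up turnover numbers for illustration:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine: the greater of a fixed 35M EUR
    or 7% of the company's worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with 2bn EUR turnover, 7% is 140M EUR, so the
# percentage cap dominates; for a 100M EUR company, 7% is only
# 7M EUR, so the fixed 35M EUR floor applies instead.
```

The fixed floor matters mainly for smaller firms; for the large labs named above, the 7% turnover figure is the binding cap.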

The EU invites stakeholders to submit feedback via the dedicated Futurium platform by November 28 to help refine the next draft. The rules are expected to be finalized by May 1, 2025.