On August 1, 2024, the EU's Artificial Intelligence Act officially entered into force, ushering in a new era of AI regulation. The legislation sets clear standards and compliance requirements for AI applications within the EU and marks a significant step forward in the bloc's risk-based approach to AI governance.

The act sets staggered compliance deadlines for different types of AI developers and applications. Although most provisions will not be fully applicable until mid-2026, some key provisions take effect just six months after entry into force, including bans on certain AI practices, such as the use of remote biometric identification technology by law enforcement in public spaces.

Under the EU's risk-based approach, AI applications are sorted into different risk levels. Most everyday applications are classified as minimal or no risk and face no new obligations under the regulation. Applications that could harm individuals or the public, however, are classified as high risk, such as biometric identification, facial recognition, and AI-based medical software. Companies developing these high-risk AI technologies must ensure their products meet strict risk- and quality-management standards, including comprehensive conformity assessments, and may be subject to audits by regulatory authorities.
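
To make the tiering concrete, the sketch below models the four risk levels in Python, along with a few example classifications. The example use cases and their tier assignments are illustrative assumptions only; actual classification under the Act depends on the context of deployment, not the technology alone.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative sketch of the Act's risk tiers, not a legal taxonomy."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and risk/quality management required"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations under the Act"

# Hypothetical example classifications, for illustration only.
EXAMPLE_USES = {
    "real-time remote biometric identification by police in public": RiskTier.UNACCEPTABLE,
    "AI-based medical software": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.name} ({tier.value})")
```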

Additionally, technologies such as chatbots, classified as "limited risk," must meet certain transparency requirements so that users know they are interacting with an AI system, helping to prevent misinformation and fraud.
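
As a rough illustration of what such a transparency requirement could look like in practice, the following sketch prefixes every chatbot reply with a disclosure notice. The disclosure wording and the generate_reply hook are assumptions for demonstration; the Act prescribes the obligation, not any particular implementation.

```python
from typing import Callable

def respond_with_disclosure(user_message: str,
                            generate_reply: Callable[[str], str]) -> str:
    """Prefix a chatbot reply with an AI-interaction disclosure.

    Both the wording and the generate_reply hook are illustrative
    assumptions, not text or an API mandated by the Act.
    """
    disclosure = "Note: you are chatting with an AI system, not a human."
    return f"{disclosure}\n\n{generate_reply(user_message)}"

# Example with a stubbed-out model call:
print(respond_with_disclosure(
    "What are your opening hours?",
    lambda msg: "We are open 9:00-17:00, Monday to Friday.",
))
```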

The Artificial Intelligence Act also introduces a tiered penalty system. Companies violating the ban on prohibited AI practices may face the most severe penalties, up to 7% of their global annual turnover. Other violations, such as failing to fulfill risk-management obligations or supplying incorrect information to regulators, draw fines of up to 3% and 1.5% of global annual turnover, respectively.
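
The arithmetic behind these caps is straightforward. The sketch below computes the maximum turnover-based fine in each tier for a hypothetical company; it uses only the percentage caps cited above and omits the fixed euro ceilings the Act also provides as alternatives.

```python
# Percentage caps as cited above; the Act additionally sets fixed euro
# amounts as alternative ceilings, which this sketch omits.
PENALTY_CAPS = {
    "prohibited AI practices": 0.07,
    "risk-management and other obligations": 0.03,
    "incorrect information to regulators": 0.015,
}

def max_turnover_fine(global_annual_turnover_eur: float, violation: str) -> float:
    """Upper bound of the turnover-based fine for a given violation tier."""
    return global_annual_turnover_eur * PENALTY_CAPS[violation]

# Hypothetical company with EUR 10 billion in global annual turnover:
turnover = 10_000_000_000
for violation, rate in PENALTY_CAPS.items():
    print(f"{violation}: up to EUR {max_turnover_fine(turnover, violation):,.0f} ({rate:.1%})")
```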

For General Purpose AI (GPAI) technologies, the EU has also established specific rules. Most GPAI developers will face light transparency obligations, including providing summaries of their training data and complying with EU copyright rules. Only the most powerful GPAI models, classified as posing potential systemic risk, will need to carry out additional risk assessments and mitigation measures.
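
To show what tracking these obligations might involve, here is a minimal sketch of a record type covering the items mentioned above. The schema and every field name are hypothetical; the Act defines the duties, not a data format.

```python
from dataclasses import dataclass

@dataclass
class GPAITransparencyRecord:
    """Hypothetical record of the GPAI obligations listed above."""
    model_name: str
    training_data_summary: str               # public summary of training content
    copyright_policy_in_place: bool          # adherence to EU copyright rules
    systemic_risk_mitigations: bool = False  # extra duty for the most capable models

record = GPAITransparencyRecord(
    model_name="example-gpai-model",  # hypothetical name
    training_data_summary="Public web text, licensed books, and open-source code.",
    copyright_policy_in_place=True,
)
print(record)
```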

With the Artificial Intelligence Act now in force, the EU's AI ecosystem has entered a new chapter. Developers, businesses, and public-sector bodies have a clear compliance roadmap, one intended to foster innovation and development in the AI industry while ensuring applications meet ethical and safety standards.

However, challenges remain. Some specific rules, particularly those for high-risk AI systems, are still being developed. European standardization bodies are actively involved in this process, with the related standards expected to be completed by April 2025.