EU AI Act – risks and application
On 1 August 2024, the European Artificial Intelligence Act (AI Act) entered into force, the world’s first comprehensive regulation of artificial intelligence. Its goal is to prohibit AI practices that pose unacceptable risks, set clear requirements for high-risk systems, and impose specific obligations on providers and deployers.
To whom does the AI Act apply?
The legislative framework applies to both public and private entities inside and outside the EU whenever an AI system is placed on the Union market or its use affects people located in the EU. Obligations can fall on both providers (e.g., a developer of a resume-screening tool) and deployers of AI systems (e.g., a bank that has purchased that tool). The regulation carries some exceptions, such as research, development, and prototyping activities, and AI systems built exclusively for military, defense, or national security purposes.
What are the risk categories?
The Act introduces a unified framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach:
- Minimal risk: For most AI systems, such as spam filters and AI-based video games, the AI Act does not impose requirements, but companies can voluntarily adopt additional codes of conduct.
- Specific transparency risk: Systems like chatbots must clearly inform users that they are interacting with a machine, and certain AI-generated content must be labeled as such.
- High risk: High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must meet strict requirements, including risk mitigation systems, high-quality datasets, clear information for users, human oversight, etc.
- Unacceptable risk: AI systems that enable “social scoring” by governments or companies are considered a clear threat to people’s fundamental rights and are therefore prohibited.
When will the AI Act be fully applicable?
The EU AI Act becomes fully applicable two years after entry into force, on 2 August 2026, with the following exceptions:
- The prohibitions, definitions, and provisions on AI literacy apply 6 months after entry into force, from 2 February 2025;
- The rules on governance and the obligations for general-purpose AI models apply 12 months after entry into force, from 2 August 2025;
- The obligations for AI systems that are classified as high-risk because they are embedded in regulated products listed in Annex I (the list of Union harmonisation legislation) apply 36 months after entry into force, from 2 August 2027.
What benefits will the Act bring to companies?
Europe is taking significant steps to regulate artificial intelligence while promoting investment in innovation and deep technologies. The European Innovation Council (EIC) plans to invest €1.4 billion in deep technologies and high-potential EU startups in 2025, according to the EIC Work Programme for 2025, an increase of €200 million over 2024. The goal is to foster a more sustainable innovation ecosystem in Europe.