Posts

EU AI Act – risks and application

 

On August 1, 2024, the European Artificial Intelligence Act (AI Act) came into force, marking the world’s first comprehensive regulation on artificial intelligence. Its goal is to ban AI practices that pose unacceptable risks, set clear requirements for high-risk systems, and impose specific obligations on providers and deployers.

To whom does the AI Act apply?

The legislative framework applies to both public and private entities within and outside the EU if the AI system is marketed in the Union or its use impacts individuals located in the EU. Obligations can apply to both providers (e.g., developers of resume screening tools) and those implementing AI systems (e.g., a bank that has purchased the resume screening tool). There are some exceptions to the regulation, such as activities in research, development, and prototyping, AI systems created exclusively for military and defense purposes, or for national security purposes, etc.

What are the risk categories?

The Act introduces a unified framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach:

  • Minimal risk: For most AI systems, such as spam filters and AI-based video games, the AI Act does not impose requirements, but companies can voluntarily adopt additional codes of conduct.
  • Specific transparency risk: Systems like chatbots must clearly inform users that they are interacting with a machine, and certain AI-generated content must be labeled as such.
  • High risk: High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must meet strict requirements, including risk mitigation systems, high-quality datasets, clear information for users, human oversight, etc.
  • Unacceptable risk: AI systems that enable “social scoring” by governments or companies are considered a clear threat to people’s fundamental rights and are therefore prohibited.
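The four tiers above can be pictured as a simple lookup. The sketch below is purely illustrative, using only the example systems named in this article; in practice, classifying a system under the AI Act requires legal analysis, not a dictionary.

```python
# Illustrative sketch only: the AI Act's four risk tiers, keyed by the
# example systems mentioned in the article. Names are our own shorthand.
RISK_TIERS = {
    "spam_filter": "minimal",          # no requirements imposed by the Act
    "ai_video_game": "minimal",
    "chatbot": "transparency",         # must disclose machine interaction
    "recruitment_screening": "high",   # strict requirements apply
    "medical_software": "high",
    "social_scoring": "unacceptable",  # prohibited outright
}

def risk_tier(system: str) -> str:
    """Return the risk tier for a known example system."""
    return RISK_TIERS.get(system, "unclassified")

print(risk_tier("chatbot"))         # transparency
print(risk_tier("social_scoring"))  # unacceptable
```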

When will the AI Act be fully applicable?

The EU AI Act will apply in full two years after entry into force, on 2 August 2026, except for the following specific provisions:

  • The prohibitions, definitions and provisions related to AI literacy will apply 6 months after entry into force, not later than 2 February 2025;
  • The rules on governance and the obligations for general purpose AI become applicable 12 months after entry into force, not later than 2 August 2025;
  • The obligations for high-risk AI systems that are classified as high-risk because they are embedded in regulated products, listed in Annex II (list of Union harmonisation legislation), apply 36 months after entry into force, not later than 2 August 2027.
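The phased dates above all derive from the entry-into-force date. As a sketch, assuming entry into force on 1 August 2024, the Act's stated application dates fall one day after plain month arithmetic (6 months after 1 August 2024 gives an application date of 2 February 2025), so the helper below adds the month offset plus one day:

```python
# Sketch of the AI Act's phased application dates, assuming entry into
# force on 1 August 2024. Stated dates are one day after the plain month
# offset, hence the extra timedelta(days=1).
from datetime import date, timedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)

def applicable_from(months: int, start: date = ENTRY_INTO_FORCE) -> date:
    """Date from which a provision applies, `months` after entry into force."""
    years, month_index = divmod(start.month - 1 + months, 12)
    shifted = start.replace(year=start.year + years, month=month_index + 1)
    return shifted + timedelta(days=1)

print(applicable_from(6))   # 2025-02-02: prohibitions, AI literacy
print(applicable_from(12))  # 2025-08-02: governance, general-purpose AI
print(applicable_from(24))  # 2026-08-02: the Act in general
print(applicable_from(36))  # 2027-08-02: high-risk systems in regulated products
```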

What will be the benefits for companies from the introduction of this act?

Europe is taking significant steps to regulate artificial intelligence and promote investment in innovation and deep technologies. The European Innovation Council (EIC) plans to invest €1.4 billion in deep technologies and high-potential startups from the EU in 2025. This is stated in the EIC Work Programme for 2025, which includes an increase of €200 million compared to 2024. The goal is to foster a more sustainable innovation ecosystem in Europe.

 

What are the penalties for infringement of the EU AI Act?

 


Member States will have to lay down effective, proportionate and dissuasive penalties for infringements of the rules for AI systems.
The Regulation sets out thresholds that need to be taken into account:

  • Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements of prohibited practices or non-compliance related to requirements on data;
  • Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Regulation;
  • Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.

For each category of infringement, the threshold would be the lower of the two amounts for SMEs and the higher for other companies.

The Commission can also enforce the rules on providers of general-purpose AI models by means of fines, taking into account the following threshold:

  • Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the obligations or measures requested by the Commission under the Regulation.

EU institutions, agencies or bodies are expected to lead by example, which is why they will also be subject to the rules and to possible penalties. The European Data Protection Supervisor will have the power to impose fines on them in case of non-compliance.

Our CEO, Dr. Galya Mancheva, was a speaker at a seminar organized by BAIT, which took place on November 20, 2024

Our CEO, Dr. Galya Mancheva, was a speaker at a seminar organized by the Bulgarian Association of Information Technology (BAIT), which took place on November 20, 2024, at Interpred. The topic of the seminar was the EU AI Act – the EU’s regulatory framework on artificial intelligence (Regulation (EU) 2024/1689).

The event was attended by nearly 20 representatives from 13 BAIT member companies, who learned about the definition of artificial intelligence, the deadlines for compliance with the new regulatory requirements, the levels of risk and the risk-based approach implied by the framework, as well as the concept of risk management in the context of these requirements.

Dr. Galya Mancheva also presented Ai advy, which helps companies meet European requirements and regulations related to artificial intelligence, particularly in the area of high-risk systems.

Europe regulates AI in order to boost investment in innovation and deep technologies

Europe regulates AI in order to boost investment in innovation and deep technologies: European Innovation Council to invest €1.4 billion in deep technologies in 2025

Next year, the European Innovation Council (EIC) will boost deep technologies and high-potential start-ups from the EU with €1.4 billion. This is set out in the EIC Work Programme for 2025. The increase is €200 million compared to 2024 and aims to boost a more sustainable innovation ecosystem in Europe.

One of the main improvements to the programme is the EIC’s new scheme to expand the Strategic Technology Platform for Europe (STEP) – its budget is €300 million and will finance larger investments in companies aiming to bring strategic technologies to the EU market.

The remaining budget is distributed across 4 funding schemes:

EIC Pathfinder – for technology solutions with a technology readiness level of up to TRL 4 with the potential to lead to technological breakthroughs.

EIC Transition – an opportunity for consortia that have already achieved research results within the EIC Pathfinder or other Horizon 2020 and Horizon Europe programmes to turn them into innovations ready for market implementation.

EIC Accelerator – support for innovation projects in the final development phase.

Research organisations, universities, SMEs, start-ups, manufacturing companies, sole traders, large companies, small mid-caps and others are eligible for funding under the individual programmes.

Our CEO Dr. Galya Mancheva has provided an update on the EU AI Act on Bloomberg TV

Our CEO Dr. Galya Mancheva provided an update on the EU AI Act on Bloomberg TV today.

Some of the insights are the following:

The ban of AI systems posing unacceptable risks will apply six months after the entry into force.

Codes of practice will apply nine months after entry into force.

Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force.

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

Dr. Galya Mancheva will be one of the speakers at the upcoming business forum EU AI Act: regulations and transformations in the industry

Dr. Galya Mancheva, founder and executive director of Ai advy, will be one of the speakers at the upcoming business forum EU AI Act: regulations and transformations in the industry, organized by b2b Media.
The event will take place on November 1st at the Intercontinental Hotel. The one-day forum brings together business leaders and entrepreneurs who are transforming their organizations in line with the latest technology trends and regulations. It will provide an opportunity to follow trends in the industry, the latest EU regulations in the field, as well as intriguing European programs to develop business with.
Dr. Mancheva will participate with the topic “EU AI Act: threat or opportunity for business transformation”. She built her risk management and regulatory expertise over 10 years in the financial technology industry. For the past three years, she has been developing Ai advy, which helps companies comply with EU regulatory requirements regarding artificial intelligence. She has authored a number of scientific publications exploring the regulation since its announcement in 2021.
You can buy a ticket worth BGN 240. We provide an additional code for a 40% discount: Advy40
Need more experts? – Get 3 tickets and get a fourth free

Come to find out:
– What are the deadlines for the implementation of the EU AI Act
– How AI will change the business climate in Bulgaria and Europe
– Which are the new European innovation programs to apply to

Article: https://b2bmedia.bg/news/nov-biznes-forum-postavq-na-fokus-regulaciite-v-ai-eto-kakvo-da-ochakvate-xepYL

Website of the event: https://ai.b2bmedia.bg/

 

EU AI Act: first regulation on artificial intelligence

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the proposal, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users; the different risk levels mean more or less regulation.

What Parliament wants in AI legislation

Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.

Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.

AI Act: different rules for different risk levels

The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.

Unacceptable risk

Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into specific areas that will have to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities.

Transparency requirements

Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Content that is either generated or modified with the help of AI – images, audio or video files (for example deepfakes) – needs to be clearly labelled as AI-generated so that users are aware when they come across such content.
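A minimal sketch of what machine-readable labelling of AI-generated media could look like. The field names and disclosure text below are our own invention for illustration; the Act mandates the labelling obligation, not any particular format.

```python
# Hypothetical labelling scheme for AI-generated media (illustrative only;
# the AI Act does not prescribe these field names or this format).
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class MediaLabel:
    content_id: str
    media_type: str                 # e.g. "image", "audio", "video"
    ai_generated: bool
    generator: Optional[str] = None  # model or tool that produced the content

def disclosure(label: MediaLabel) -> str:
    """Human-readable notice to show alongside the content."""
    if not label.ai_generated:
        return ""
    return f"This {label.media_type} was generated or modified by AI."

label = MediaLabel("img-001", "image", True, generator="example-model")
print(disclosure(label))          # This image was generated or modified by AI.
print(json.dumps(asdict(label)))  # machine-readable form for downstream tools
```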

Supporting innovation

The law aims to offer start-ups and small and medium-sized enterprises opportunities to develop and train AI models before their release to the general public.

That is why it requires that national authorities provide companies with a testing environment that simulates conditions close to the real world.

Next steps

The Parliament adopted the Artificial Intelligence Act in March 2024 and the Council followed with its approval in May 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

  • The ban of AI systems posing unacceptable risks will apply six months after the entry into force
  • Codes of practice will apply nine months after entry into force
  • Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

Source: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Which risks will be covered by the new AI rules?

The deployment of AI systems has great potential to deliver societal benefits, economic growth and boost EU innovation as well as global competitiveness. In some cases, however, the specific characteristics of some AI systems may lead to new risks related to consumer safety and fundamental rights. Some powerful AI models that are widely used could even pose systemic risks.

This leads to legal uncertainty for companies and a potentially slower uptake of AI technologies among businesses and citizens due to a lack of trust. An unsynchronized regulatory response by national authorities risks fragmenting the internal market.

Why should we regulate the use of artificial intelligence?

The potential benefits of artificial intelligence (AI) for our societies are numerous – from better medical care to better education. Given the rapid technological development of AI, the EU is determined to act as one to make good use of these opportunities.

The EU AI Act is the first comprehensive AI legislation in the world. It aims to address risks to health, safety and fundamental rights. The regulation also protects democracy, the rule of law and the environment.

Although most AI systems involve little or no risk, some AI systems create risks that must be accounted for in order to avoid undesirable outcomes.

For example, the non-transparency of many algorithms can lead to uncertainty and hinder the effective implementation of existing safety and fundamental rights legislation. In response to these challenges, legislative action was needed to ensure a well-functioning internal market for AI systems where both benefits and risks are adequately considered.

This includes applications such as biometric identification systems or decisions made by AI that affect important personal interests, for example in employment, education, healthcare or law enforcement.

In front of 500 guests, Dr. Galia Mancheva shared the latest news regarding the regulation of artificial intelligence by the EU

On April 19th, 2024, Dr. Galia Mancheva, the CEO of Ai advy, took part in the only workshop of its kind, by entrepreneurs for entrepreneurs.

Dr. Mancheva took part in the discussion panel “Products and services of the future. Innovations and automation in business development”, sharing the latest news on the EU AI Act.

Watch the panel discussion: https://www.youtube.com/watch?v=IM7vsZXh4Fc

Sales Club Conference (SCC) is a conference focused on the needs, problems and challenges of modern entrepreneurs. At this stage, SCC is not just a popular corporate forum, but a growing community of active business leaders and professionals who want to develop their careers.