Our CEO, Dr. Galya Mancheva, will lead an insightful online training, “EU AI Act,” on February 20, 2025.

Master Events announces an online seminar on the European Artificial Intelligence Act, led by our CEO, Dr. Galya Mancheva.

Master Events is pleased to invite professionals and organizations to an online seminar titled “The European Artificial Intelligence Act,” scheduled for February 20, 2025. This half-day seminar aims to provide a comprehensive understanding of the recently enacted European AI Act, its objectives, regulatory scope, and the obligations it imposes on organizations.

Seminar Highlights:

  • Introduction to the AI Act: An overview of the development and scope of the regulation.
  • Key Definitions: Clarification of essential terms related to artificial intelligence as defined by the Act.
  • Regulatory Objectives: Insight into the primary goals the Act seeks to achieve within the EU.
  • Risk-Based Approach: Understanding the Act’s methodology in categorizing AI systems based on associated risks.
  • Risk Levels and High-Risk Systems: Detailed discussion on different risk categories and the specific requirements for high-risk AI systems.
  • Risk Management for High-Risk Systems: Strategies and best practices for managing risks associated with high-risk AI applications.
  • Penalties and Sanctions: Information on the fines and sanctions for non-compliance with the Act.
  • Advantages and Disadvantages: A balanced view of the benefits and potential drawbacks of the Act.
  • EU Incentives: Overview of incentives provided by the EU to encourage compliance and innovation in AI.

About the European AI Act:

Enacted on August 1, 2024, the European Artificial Intelligence Act is the world’s first comprehensive regulation on artificial intelligence. It aims to prohibit AI practices that pose unacceptable risks, establish clear requirements for high-risk systems, and impose specific obligations on providers and deployers. The legislative framework applies to both public and private entities within and outside the EU if the AI system is marketed in the Union or its use impacts individuals within the EU.

Who Should Attend:

This seminar is designed for professionals and organizations involved in the development, implementation, or oversight of AI systems, including:

  • AI Developers and Engineers
  • Compliance Officers
  • Legal Advisors
  • Risk Management Professionals
  • Policy Makers
  • Academic Researchers

Registration Details:

Date: February 20, 2025

Format: Online Seminar

Remaining Seats: 10 (Limited availability to ensure effective learning and engagement)

Participants will benefit from high-quality presentations, practical insights, and the opportunity to have their questions and case studies addressed. The seminar will be conducted through an innovative and user-friendly online platform, ensuring a seamless learning experience.

Master Events guarantees 100% satisfaction. If participants are not fully satisfied, a refund will be provided.

 

About Master Events:

Master Events specializes in organizing online seminars, trainings, and conferences for companies and governmental institutions. In addition to open courses and trainings, we organize in-house seminars tailored to your requirements.

For more information and to register for the seminar, please visit: Online Training – The European Artificial Intelligence Act

Contact:

Master Events

Email: info@masterevents.bg

Phone: +359 2 123 4567

Website: MASTER EVENTS – Online Seminars and Trainings 2024

Stay connected with us on social media:

Facebook: Master Events | Sofia

LinkedIn: Master Events BG

Join us to gain a thorough understanding of the European AI Act and its implications for your organization.

EU AI Act – Risks and Application

 

On August 1, 2024, the European Artificial Intelligence Act (AI Act) came into force, marking the world’s first comprehensive regulation on artificial intelligence. Its goal is to prohibit AI practices that pose unacceptable risks, set clear requirements for high-risk systems, and impose specific obligations on providers and deployers.

To whom does the AI Act apply?

The legislative framework applies to both public and private entities within and outside the EU if the AI system is marketed in the Union or its use impacts individuals located in the EU. Obligations can apply both to providers (e.g., developers of resume screening tools) and to deployers of AI systems (e.g., a bank that has purchased the resume screening tool). The regulation provides some exemptions, such as research, development, and prototyping activities, and AI systems created exclusively for military, defense, or national security purposes.

What are the risk categories?

The Act introduces a unified framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach:

  • Minimal risk: For most AI systems, such as spam filters and AI-based video games, the AI Act does not impose requirements, but companies can voluntarily adopt additional codes of conduct.
  • Specific transparency risk: Systems like chatbots must clearly inform users that they are interacting with a machine, and certain AI-generated content must be labeled as such.
  • High risk: High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must meet strict requirements, including risk mitigation systems, high-quality datasets, clear information for users, human oversight, etc.
  • Unacceptable risk: AI systems that enable “social scoring” by governments or companies are considered a clear threat to people’s fundamental rights and are therefore prohibited.

When will the AI Act be fully applicable?

The EU AI Act will become fully applicable on 2 August 2026, two years after entry into force, except for the following specific provisions:

  • The prohibitions, definitions, and provisions related to AI literacy apply 6 months after entry into force, no later than 2 February 2025;
  • The rules on governance and the obligations for general-purpose AI become applicable 12 months after entry into force, no later than 2 August 2025;
  • The obligations for AI systems that are classified as high-risk because they are embedded in regulated products listed in Annex II (the list of Union harmonisation legislation) apply 36 months after entry into force, no later than 2 August 2027.

What benefits will the Act bring to companies?

Europe is taking significant steps to regulate artificial intelligence while promoting investment in innovation and deep tech. The European Innovation Council (EIC) plans to invest €1.4 billion in deep tech and high-potential EU startups in 2025, according to the EIC Work Programme for 2025, an increase of €200 million compared to 2024. The goal is to foster a more sustainable innovation ecosystem in Europe.