The EU AI Act: A New Era of Business Opportunities
by Ralitsa Hristova
The EU AI Act is not just about compliance—it’s a catalyst for business growth, innovation, and competitiveness. As the world’s first comprehensive AI regulation, it creates a clear legal framework that fosters trust, attracts investment, and stimulates market expansion. Here’s how businesses can leverage the Act to thrive in the AI-driven economy.
1. Strengthening Consumer and Investor Trust
One of the biggest barriers to AI adoption is public skepticism around transparency, bias, and data security. The EU AI Act mandates strict risk assessment, documentation, and human oversight, ensuring ethical AI deployment. Businesses that comply will gain a competitive edge by demonstrating responsible AI use, attracting customers and investors who prioritize trustworthy and ethical AI solutions.
2. Unlocking Access to New Markets
The EU AI Act harmonizes regulations across all 27 member states, reducing fragmentation and creating a unified market for AI-driven businesses. Instead of navigating different national laws, companies can scale AI solutions across Europe more easily. This clarity lowers legal risks and encourages cross-border expansion, particularly for startups and SMEs.
3. Accelerating AI Innovation through Regulatory Sandboxes
To balance regulation and innovation, the EU AI Act introduces regulatory sandboxes—controlled environments where businesses can test AI applications under real-world conditions with regulatory guidance. This is a golden opportunity for startups and enterprises to experiment with cutting-edge AI technologies without facing immediate regulatory hurdles.
4. Boosting AI Investment and Funding Opportunities
With a stable regulatory environment, investors are more willing to back AI ventures, knowing they are compliant with EU law. Public and private sectors are also expected to increase funding for AI research and development, particularly in sectors like healthcare, finance, and manufacturing where high-risk AI applications require compliance with the Act.
5. Competitive Advantage for Compliant AI Businesses
Businesses that proactively align with the EU AI Act will be first movers in a trusted AI ecosystem. This could result in:
✔ Stronger partnerships with organizations that require AI compliance
✔ Enhanced brand reputation as a responsible AI leader
✔ Early adoption of best practices, leading to smoother transitions when stricter global AI regulations emerge
What’s Your Take?
How do you see the EU AI Act influencing your industry? Are you already preparing for compliance? Book a free call with us at: https://lnkd.in/dNGhkv3T
EU AI Act – risks and application
by Ralitsa Hristova
On August 1, 2024, the European Artificial Intelligence Act (AI Act) came into force, marking the world’s first comprehensive regulation on artificial intelligence. Its goal is to limit AI processes that pose unacceptable risks, set clear requirements for high-risk systems, and impose specific obligations on implementers and providers.
To whom does the AI Act apply?
The legislative framework applies to both public and private entities within and outside the EU if the AI system is marketed in the Union or its use impacts individuals located in the EU. Obligations can apply to both providers (e.g., developers of resume screening tools) and those implementing AI systems (e.g., a bank that has purchased the resume screening tool). There are some exceptions to the regulation, such as activities in research, development, and prototyping, AI systems created exclusively for military and defense purposes, or for national security purposes, etc.
What are the risk categories?
The Act introduces a unified framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach:
- Minimal risk: For most AI systems, such as spam filters and AI-based video games, the AI Act does not impose requirements, but companies can voluntarily adopt additional codes of conduct.
- Specific transparency risk: Systems like chatbots must clearly inform users that they are interacting with a machine, and certain AI-generated content must be labeled as such.
- High risk: High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must meet strict requirements, including risk mitigation systems, high-quality datasets, clear information for users, human oversight, etc.
- Unacceptable risk: AI systems that enable “social scoring” by governments or companies are considered a clear threat to people’s fundamental rights and are therefore prohibited.
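The four tiers above can be summarised as a simple lookup table. This is a minimal sketch for illustration only; the tier names and consequences follow the article, but the dictionary layout, example systems, and function name are our own simplification, not anything defined in the Act itself.

```python
# Illustrative mapping of the AI Act's four risk tiers to example systems
# and the broad consequence each tier carries. The structure is a
# hypothetical simplification for this article, not a legal taxonomy.
RISK_TIERS = {
    "minimal": {
        "examples": ["spam filter", "AI-based video game"],
        "consequence": "no mandatory requirements (voluntary codes of conduct)",
    },
    "transparency": {
        "examples": ["chatbot", "AI-generated content"],
        "consequence": "must disclose AI involvement and label AI-generated content",
    },
    "high": {
        "examples": ["AI-based medical software", "recruitment screening tool"],
        "consequence": "risk mitigation, high-quality datasets, human oversight, documentation",
    },
    "unacceptable": {
        "examples": ["government or corporate social scoring"],
        "consequence": "prohibited",
    },
}

def consequence(tier: str) -> str:
    """Return the regulatory consequence for a given risk tier."""
    return RISK_TIERS[tier]["consequence"]

print(consequence("unacceptable"))  # prohibited
```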
When will the AI Act be fully applicable?
The EU AI Act will become fully applicable two years after entry into force, on 2 August 2026, except for the following specific provisions:
- The prohibitions, definitions and provisions related to AI literacy will apply 6 months after entry into force, not later than 2 February 2025;
- The rules on governance and the obligations for general purpose AI become applicable 12 months after entry into force, not later than 2 August 2025;
- The obligations for high-risk AI systems that qualify as high-risk because they are embedded in regulated products, listed in Annex I (the list of Union harmonisation legislation), apply 36 months after entry into force, not later than 2 August 2027.
What will be the benefits for companies from the introduction of this act?
Europe is taking significant steps to regulate artificial intelligence and promote investment in innovation and deep technologies. The European Innovation Council (EIC) plans to invest €1.4 billion in deep technologies and high-potential startups from the EU in 2025. This is stated in the EIC Work Programme for 2025, which includes an increase of €200 million compared to 2024. The goal is to foster a more sustainable innovation ecosystem in Europe.
What are the penalties for infringement of the EU AI Act?
by Ralitsa Hristova
Penalties for infringement
Member States will have to lay down effective, proportionate and dissuasive penalties for infringements of the rules for AI systems.
The Regulation sets out thresholds that need to be taken into account:
- Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements on prohibited practices or non-compliance related to requirements on data;
- Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Regulation;
- Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request;
For each category of infringement, the threshold would be the lower of the two amounts for SMEs and the higher for other companies.
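The "whichever is higher" rule for large companies and "whichever is lower" rule for SMEs can be expressed as a small calculation. This is a rough sketch of the arithmetic only, under the thresholds quoted above; the function name and structure are our own illustration, and an actual fine would be set by the competent authority within these caps, not computed mechanically.

```python
def fine_cap(fixed_eur: int, pct: float, turnover_eur: int, sme: bool = False) -> float:
    """Illustrative upper bound on an AI Act administrative fine.

    The Regulation caps fines at a fixed amount or a percentage of total
    worldwide annual turnover of the preceding financial year: whichever
    is HIGHER for most companies, whichever is LOWER for SMEs.
    `pct` is expressed in whole percent (e.g. 7 for 7%).
    """
    pct_amount = turnover_eur * pct / 100
    return min(fixed_eur, pct_amount) if sme else max(fixed_eur, pct_amount)

# Prohibited-practice infringement: up to EUR 35m or 7% of turnover.
# Large company with EUR 1bn turnover: 7% (70m) exceeds 35m, so 70m applies.
print(fine_cap(35_000_000, 7, 1_000_000_000))            # 70000000.0

# SME with EUR 10m turnover: 7% (0.7m) is below 35m, and the LOWER applies.
print(fine_cap(35_000_000, 7, 10_000_000, sme=True))     # 700000.0
```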
The Commission can also enforce the rules on providers of general-purpose AI models by means of fines, taking into account the following threshold:
- Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the obligations or measures requested by the Commission under the Regulation.
EU institutions, agencies or bodies are expected to lead by example, which is why they will also be subject to the rules and to possible penalties. The European Data Protection Supervisor will have the power to impose fines on them in case of non-compliance.
EU AI Act: first regulation on artificial intelligence
by Dr. Galia Mancheva
As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the framework, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users, and the different risk levels mean more or less regulation.
What Parliament wants in AI legislation
Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.
Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.
AI Act: different rules for different risk levels
The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.
Unacceptable risk
Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:
Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
Biometric identification and categorisation of people
Real-time and remote biometric identification systems, such as facial recognition
Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.
High risk
AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into specific areas that will have to be registered in an EU database:
Management and operation of critical infrastructure
Education and vocational training
Employment, worker management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum and border control management
Assistance in legal interpretation and application of the law.
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities.
Transparency requirements
Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:
Disclosing that the content was generated by AI
Designing the model to prevent it from generating illegal content
Publishing summaries of copyrighted data used for training
High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.
Content that is either generated or modified with the help of AI – images, audio or video files (for example deepfakes) – needs to be clearly labelled as AI-generated so that users are aware when they come across such content.
Supporting innovation
The law aims to offer start-ups and small and medium-sized enterprises opportunities to develop and train AI models before their release to the general public.
That is why it requires that national authorities provide companies with a testing environment that simulates conditions close to the real world.
Next steps
The Parliament adopted the Artificial Intelligence Act in March 2024 and the Council followed with its approval in May 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:
The ban of AI systems posing unacceptable risks will apply six months after the entry into force
Codes of practice will apply nine months after entry into force
Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force
High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
Source: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Who does the AI Act apply to?
by Ralitsa Hristova
The legislative framework will apply to both public and private entities inside and outside the EU if the AI system is placed on the Union market or its use affects persons located in the EU.
It can apply to both providers (e.g. the developer of a resume screening tool) and those implementing high-risk AI systems (e.g. a bank that purchased a resume screening tool). Importers of AI systems must also ensure that the foreign supplier has already carried out the relevant conformity assessment procedure, and that the AI system bears a European Conformity (CE) mark and is accompanied by the necessary documentation and instructions for use.
In addition, certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models.
Providers of free and open-source models are exempt from most of these obligations. This exemption does not cover the obligations of providers of general-purpose AI models with systemic risks.
The obligations also do not apply to pre-market research, development and prototyping activities, and the regulation does not apply to AI systems used exclusively for military and defence purposes or for purposes in the field of national security, regardless of the type of entity performing these activities.
Which risks will be covered by the new AI rules?
by Ralitsa Hristova
The deployment of AI systems has great potential to deliver societal benefits and economic growth, and to boost EU innovation and global competitiveness. In some cases, however, the specific characteristics of some AI systems may lead to new risks related to consumer safety and fundamental rights. Some powerful, widely used AI models could even pose systemic risks.
This leads to legal uncertainty for companies and a potentially slower uptake of AI technologies among businesses and citizens due to a lack of trust. An unsynchronized regulatory response by national authorities risks fragmenting the internal market.
Why should we regulate the use of artificial intelligence?
by Ralitsa Hristova
The potential benefits of artificial intelligence (AI) for our societies are numerous, from better medical care to better education. Given the rapid technological development of AI, the EU is determined to act as one to make good use of these opportunities.
The EU AI Act is the first comprehensive AI legislation in the world. It aims to address risks to health, safety and fundamental rights. The regulation also protects democracy, the rule of law and the environment.
Although most AI systems involve little or no risk, some AI systems create risks that must be accounted for in order to avoid undesirable outcomes.
For example, the non-transparency of many algorithms can lead to uncertainty and hinder the effective implementation of existing safety and fundamental rights legislation. In response to these challenges, legislative action was needed to ensure a well-functioning internal market for AI systems where both benefits and risks are adequately considered.
This includes applications such as biometric identification systems or decisions made by AI that affect important personal interests, for example in employment, education, healthcare or law enforcement.
Could the new EU AI Act stifle genAI innovation in Europe? A new study says it could
by Dr. Galia Mancheva
“The EU’s AI Act needs to be closely monitored to avoid overburdening innovative AI developers… with unnecessary red tape,” one industry body warns.
Europe’s growing generative artificial intelligence (GenAI) landscape is highly competitive, but additional regulation could stifle innovation, a new study has found.
The report by Copenhagen Economics said that there are no “immediate competition concerns” in Europe’s generative AI scene that would warrant regulatory intervention. The study comes as EU regulators try to tighten rules on regulating competition in the AI market with its Digital Markets Act, EU AI Act, and AI office.
The Computer & Communications Industry Association (CCIA Europe), which commissioned the independent report, warned that regulatory intervention would be premature, slow down innovation and growth, and reduce consumer choice in generative AI.
“Allowing competition to flourish in the AI market will be more beneficial to European consumers than additional regulation prematurely being imposed, which would only stifle innovation and hinder new entrants,” said Aleksandra Zuchowska, CCIA Europe’s Competition Policy Manager.
“Instead, the impact of new AI-specific rules, such as the EU’s recently adopted AI Act, needs to be closely monitored to avoid overburdening innovative AI developers with disproportionate compliance costs and unnecessary red tape”.
The authors of the study noted that a growing number of foundation model developers are active in the EU, such as Mistral AI and Aleph Alpha, and recognised that Europe’s GenAI sector is vibrant.
Competition concerns
But while they found no competition concerns in the short term, some may emerge in the near future, including uncertainty for GenAI start-ups as they face growth challenges and regulatory costs, such as those arising from the EU AI Act.
The study also warned there are potential competition concerns, which include limited access to data, partnerships between large companies and smaller ones, and leveraging behaviour by big companies.
France’s Mistral AI is a prime example of a start-up that signed a partnership with Big Tech: in February it made its large language model (LLM) available to Microsoft Azure customers and gave Microsoft a minority stake in the company.
The study noted that if a larger partner uses its market power to exercise decisive control over a start-up or gain privileged or exclusive access to its technology, it could harm competition.
But it said partnerships are less likely to create competition concerns if there are no or limited exclusivity conditions and limited privileged access to the startup’s valuable technological assets.
Source: https://www.euronews.com/next/2024/03/22/could-the-new-eu-ai-act-stifle-genai-innovation-in-europe-a-new-study-says-it-could
The EU AI Act is not just a regulatory hurdle—it’s a game-changer for AI-driven innovation. Businesses across industries are rethinking their AI strategies to align with compliance, ethics, and responsible AI adoption. But beyond legal obligations, the Act presents an opportunity: companies that adapt early can lead the future of ethical AI innovation.
Key Changes: How the EU AI Act is Reshaping AI Development
1️⃣ High-Risk AI: More Oversight, More Trust
Under the Act, AI systems used in finance, healthcare, HR, law enforcement, and education are classified as high-risk. These systems must:
✅ Undergo rigorous risk assessments and bias testing
✅ Ensure human oversight in decision-making
✅ Provide clear documentation and explainability
What This Means for Businesses:
Companies must redesign AI models to be more interpretable and auditable, shifting the focus from “black-box AI” to trustworthy AI.
2️⃣ Transparency & Accountability: The New AI Standard
One of the Act’s most impactful requirements is transparency. AI providers must:
– Disclose training data sources (to prevent bias and copyright violations)
– Label AI-generated content to prevent misinformation
– Implement strong governance frameworks for continuous monitoring
The Business Impact:
AI leaders who invest in ethical AI development will win consumer trust and differentiate themselves in the market.
3️⃣ General-Purpose AI (GPAI): The Next Big Challenge
Foundation models and large-scale AI systems (such as ChatGPT, Gemini, and Claude) must comply with:
– Risk mitigation plans to prevent harmful applications
– Increased scrutiny on AI-generated content (deepfakes, misinformation, etc.)
– Audit requirements for bias and safety
The Innovation Shift:
Big Tech and AI startups must balance innovation speed with ethical safeguards, leading to more responsible AI models.
What’s Next? The Future of AI Innovation in Business
The EU AI Act is a turning point for AI-driven industries. Businesses that embrace compliance as a competitive advantage will lead the next era of trustworthy and ethical AI.