Could the new EU AI Act stifle genAI innovation in Europe? A new study says it could

“The EU’s AI Act needs to be closely monitored to avoid overburdening innovative AI developers… with unnecessary red tape,” one industry body warns.


Europe’s growing generative artificial intelligence (GenAI) landscape is highly competitive, but additional regulation could stifle innovation, a new study has found.

The report by Copenhagen Economics said there are no “immediate competition concerns” in Europe’s generative AI scene that would warrant regulatory intervention. The study comes as EU regulators tighten competition rules in the AI market through the Digital Markets Act, the EU AI Act, and the AI Office.

The Computer & Communications Industry Association (CCIA Europe), which commissioned the independent report, warned that regulatory intervention would be premature, slow down innovation and growth, and reduce consumer choice in generative AI.

“Allowing competition to flourish in the AI market will be more beneficial to European consumers than additional regulation prematurely being imposed, which would only stifle innovation and hinder new entrants,” said Aleksandra Zuchowska, CCIA Europe’s Competition Policy Manager.

“Instead, the impact of new AI-specific rules, such as the EU’s recently adopted AI Act, needs to be closely monitored to avoid overburdening innovative AI developers with disproportionate compliance costs and unnecessary red tape”.

The authors of the study noted that a growing number of foundation model developers are active in the EU, such as Mistral AI and Aleph Alpha, and recognised that Europe’s GenAI sector is vibrant.

Competition concerns

But while they said there are no competition concerns in the short term, some may emerge in the near future, including uncertainty for GenAI start-ups as they face challenges in scaling up and regulatory costs, such as those stemming from the EU AI Act.

The study also warned there are potential competition concerns, which include limited access to data, partnerships between large companies and smaller ones, and leveraging behaviour by big companies.

France’s Mistral AI is a prime example of a start-up partnering with Big Tech: in February it made its large language model (LLM) available to Microsoft Azure customers and gave Microsoft a minority stake in the company.

The study noted that if a larger partner uses its market power to exercise decisive control over a start-up or gain privileged or exclusive access to its technology, it could harm competition.

But it said partnerships are less likely to create competition concerns if there are no or limited exclusivity conditions and limited privileged access to the start-up’s valuable technological assets.


Draft UN resolution on AI aims to make it ‘safe and trustworthy’

A new draft resolution aims to close the digital divide on artificial intelligence.

The United States is spearheading the first United Nations resolution on artificial intelligence (AI), aimed at ensuring the new technology is “safe, secure and trustworthy” and that all countries, especially those in the developing world, have equal access.

The draft General Assembly resolution aims to close the digital divide between countries and make sure they are all at the table in discussions on AI — and that they have the technology and capabilities to take advantage of its benefits, including detecting diseases, predicting floods and training the next generation of workers.

The draft recognises the rapid acceleration of AI development and use and stresses “the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems”.

It also recognises that “the governance of artificial intelligence systems is an evolving area” that needs further discussions on possible governance approaches.

Fostering ‘safe and trustworthy’ AI

US National Security Advisor Jake Sullivan said the United States turned to the General Assembly “to have a truly global conversation on how to manage the implications of the fast-advancing technology of AI”.

The resolution “would represent global support for a baseline set of principles for the development and use of AI and would lay out a path to leverage AI systems for good while managing the risks,” he said in a statement to The Associated Press.

If approved, Sullivan said, “this resolution will be an historic step forward in fostering safe, secure and trustworthy AI worldwide.”

The United States began negotiating with the 193 UN member nations about three months ago, spent hundreds of hours in direct talks with individual countries, 42 hours in negotiations and accepted input from 120 nations, a senior US official said.

The resolution went through several drafts, achieved consensus support from all member states this week, and will be formally considered later this month, the official said, speaking on condition of anonymity because he was not authorised to speak publicly.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding, but they are an important barometer of world opinion.

A key goal, according to the draft resolution, is to use AI to help spur progress toward achieving the UN’s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.

The draft resolution encourages all countries, regional and international organisations, technical communities, civil society, the media, academia, research institutions and individuals “to develop and support regulatory and governance approaches and frameworks” for safe AI systems.

It warns against “improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law”.

New AI regulations

Lawmakers in the European Union are set to give final approval to the world’s first comprehensive AI rules on Wednesday. Countries around the world, including the US and China, and global groupings like the Group of 20 industrialised nations, are also moving to draw up AI regulations.

The US draft calls on the 193 UN member states and others to assist developing countries in accessing the benefits of digital transformation and safe AI systems. It “emphasises that human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of artificial intelligence systems.”

US Ambassador Linda Thomas-Greenfield recalled President Joe Biden’s address to the General Assembly last year where he said emerging technologies, including AI, hold enormous potential.

She said the resolution, which is co-sponsored by dozens of countries, “aims to build international consensus on a shared approach to the design, development, deployment and use of AI systems,” particularly to support the 2030 UN goals.

The resolution responds to “the profound implications of this technology,” Thomas-Greenfield said, and if adopted it will be “an historic step forward in fostering safe, secure and trustworthy AI worldwide.”


EU Policy. Lawmakers approve AI Act with overwhelming majority

Lawmakers in the European Parliament today (13 March) approved the AI Act, rules aimed at regulating AI systems according to a risk-based approach.

The law passed with an overwhelming majority: 523 votes in favour, 46 against and 49 abstentions.

The act, which needed final endorsement after approval at political and technical levels, will now most likely enter into force this May.

Parliament AI Act co-lead, Italian lawmaker Brando Benifei (S&D), described it as “a historic day” in a subsequent press conference.

“We have the first regulation in the world which puts a clear path for a safe and human centric development of AI. We have now got a text that reflects the parliament’s priorities,” he said.

“The main point now will be implementation and compliance of businesses and institutions. We are also working on further legislation for the next mandate such as a directive on conditions in the workplace and AI,” Benifei said.

His counterpart Dragoş Tudorache (Romania/Renew) told the same conference that the EU is looking to partner countries to ensure the rules have a global impact. “We have to be open to work with others on how to promote these rules, and build a governance with like-minded parties,” he said.

Entry into force

Under the AI Act, machine learning systems will be divided into four main categories according to the potential risk they pose to society. The systems that are considered high risk will be subject to stringent rules that will apply before they enter the EU market.

The general-purpose AI rules will apply one year after entry into force, in May 2025, and the obligations for high-risk systems in three years. They will be under the oversight of national authorities, supported by the AI Office inside the European Commission. It will now be up to the member states to set up national oversight agencies. The Commission told Euronews that countries have 12 months to nominate these watchdogs.

In a response to today’s vote, Cecilia Bonefeld-Dahl, head of EU trade organisation Digital Europe, said that more needs to be done to keep companies based in Europe.

“Today, only 3% of the world’s AI unicorns come from the EU, with about 14 times more private investment in AI in the US and five times more in China. By 2030, the global AI market is expected to reach $1.5 trillion, and we need to ensure that European companies tap into that without getting tangled up in red tape,” Bonefeld-Dahl said.

Ursula Pachl, Deputy Director General of the European Consumer Organisation (BEUC), welcomed the approval of the law and said it will help consumers to join collective redress claims if they have been harmed by the same AI system.

“Although the legislation should have gone further to protect consumers, the top priority for the European Commission and national governments should now be to show they are serious about the AI Act by implementing it without delay and providing the relevant regulators that will enforce it with the necessary resources,” Pachl said.


EU Policy. As the EU AI Act enters into force, focus shifts to countries’ oversight appointments

Member states have 12 months to nominate national competent authorities tasked with compliance.

Europe’s AI Act, the world’s first attempt to regulate AI systems according to a risk-based approach, will get the final nod in the European Parliament tomorrow (13 March), paving the way for the rules to finally enter into force. However, when it comes to oversight, member states are still in the early stages of determining which regulator is best placed to oversee compliance.

Tomorrow, lawmakers will vote on the text without the minor linguistic changes made by the lawyers during the translation phase. These still need to be formally approved, either by a separate vote in the April plenary or by a formal announcement. The rules will then be published in the EU official journal, which is likely to happen in May.

Under the AI Act, machine learning systems will be divided into four main categories according to the potential risk they pose to society. The systems that are considered high risk will be subject to stringent rules that will apply before they enter the EU market.

In November, the bans on prohibited practices specified in the AI Act will apply. The general-purpose AI rules will apply one year after entry into force, in May 2025, and the obligations for high-risk systems in three years. They will be under the oversight of national authorities, supported by the AI Office inside the European Commission.

A European Commission spokesperson told Euronews that the countries have 12 months to nominate relevant national competent authorities, and the commission “awaits notification of these appointments in due course”.


Spain was the first EU country to set up an Agency for the Supervision of Artificial Intelligence (AESIA) in A Coruña in 2023. In other countries such as the Netherlands, the data protection authority set up a department dealing with algorithms last year. The office currently has 12 employees and the expectation is that it will grow to 20 people this year.

In the case of Ireland, the Department of Enterprise, Trade and Employment will lead the development of the national implementation plan for the AI Act. A spokesperson told Euronews however that “as the EU legislative process has not yet completed, and the Act has not yet been adopted, it would be premature to speculate about the national arrangements for enforcement.”

The Luxembourg Department of Media, Telecommunications and Digital Policy said that the country is “working hard to implement the AI Act”.

“We are consulting with all the relevant stakeholders, first and foremost the existing authorities and regulators that will have a role to play in the new governance framework. An important aspect for us is an efficient coordination between regulators, as we want to make it as clear as possible for businesses and citizens to interact with the new legislation,” a spokesperson said.

Meanwhile, the Commission has also begun its recruitment process for policy and technical jobs at the AI Office, which will help deliver a harmonised application of the rules across the member states via information exchanges. The deadline for applications is 27 March.

Trade organisations have warned about gaps in implementation and enforcement now that the rules are on the brink of entering into force. CCIA Europe, which represents Big Tech companies, previously warned that implementation will be crucial “to not overburden companies” that try to innovate.

Digital Europe, which represents both tech companies and national trade associations, called during the AI Act trilogue negotiations for a 48-month transitional period to ensure the whole ecosystem is ready, as well as for the timely availability of harmonised standards for industry to comply with the rules.


AI excellence thriving from the lab to the market

The Communication on boosting startups and innovation in trustworthy AI outlines concrete joint actions for the Commission and Member States to create global leadership on trustworthy AI.

Strategies for developing and adopting trustworthy AI

The strategic framework set out in the Communication on boosting startups and innovation in trustworthy artificial intelligence aims to foster an innovative, fair, open and contestable AI market. It not only bolsters European companies at home but also empowers them to compete confidently on the global stage. The Communication builds upon the existing European approach to excellence in AI, in particular the Coordinated Plan on AI.

The 2021 review of the Coordinated Plan includes proposals to make the EU the place where artificial intelligence (AI) excellence thrives from the lab to the market by promoting research in AI, encouraging its uptake, and funding innovation. The Commission strategy for supporting the development and adoption of trustworthy AI solutions is a values-driven approach which covers the whole lifecycle and focuses on building an ecosystem of excellence. The key actions seek to:

  • foster collaboration between stakeholders through Public-Private Partnerships (PPP);
  • build and mobilise AI research capabilities;
  • provide facilities for developers and companies to test and experiment with their latest AI-based technologies in real-world environments;
  • develop a European marketplace for trustworthy AI solutions, connecting resources and services;
  • fund and scale innovative ideas and solutions for AI.

European partnership on AI, Data and Robotics

The European Partnership on AI, Data and Robotics Association (ADRA) was officially launched in June 2021 with the signing of a Memorandum of Understanding between the European Commission and the ADR association. ADRA was awarded €2.6 billion under the funding programmes Horizon 2020 (2014-2020) and Horizon Europe (2021-2027). This public-private partnership brings together three key communities – AI, Data and Robotics – and five organisations: BDVA, CLAIRE, ELLIS, EurAI and euRobotics.

In 2024, ADRA counts more than 133 members, including startups, SMEs and large industry as well as research organisations and RTOs. In November 2023, ADRA organised its first event – the ADR Forum – which focused on generative AI. ADRA engages with many stakeholders, including industry, to tackle the challenges of generative AI. This led to the establishment of two taskforces in 2023:

  • GenAI – dedicated to addressing challenges related to generative AI.
  • Generative AI for Robotics – focused on generative AI applications in the field of robotics.

Getting world reference AI capabilities in Europe

The European Networks of Excellence in AI (NoEs) bring together the brightest AI minds in Europe to tackle challenging AI problems. NoEs benefit from funding under both Horizon 2020 and Horizon Europe and are driving research and development forward. These world-class networks encompass a significant portion of Europe’s top AI expertise. They foster synergy and a critical mass in European AI to help translate AI advancements into tangible real-world impacts across various domains.

As of 2023, the EU AI & Robotics NoEs community includes nine projects.

NoEs are strongly linked to ADRA. NoEs are the private side of a public-private partnership that plays a key role in bringing knowledge from the European research community to industry.

World-class testing and experimentation facilities

Since January 2023, four sectoral testing and experimentation facilities (TEFs) have been providing European SMEs with the means to test their latest AI-based technologies (Technology Readiness Levels 6-8) in real or close-to-real conditions. TEF services include support for full integration, as well as the testing, experimentation and validation of AI-based solutions in the sectors served by the four TEFs.

The TEFs are co-funded by the Commission and Member States with a total investment of over 220 million Euros and help innovators to test their close-to-market AI solutions before introducing them to the market.

In March 2024, the four TEFs held their official launch event, signalling their readiness to engage with businesses. The TEFs provide technical assistance to AI innovators, allowing them to evaluate and experiment with cutting-edge AI software and hardware in real-world scenarios (specifically, at Technology Readiness Levels 6-8). The services offered encompass full integration support, as well as testing, experimentation and validation of AI-based solutions across the sectors served by these four TEFs.

The European AI-on-demand platform

The AI-on-demand platform aims to build a bridge between European AI research, industry and public services. It acts as a catalyst for collaboration between academia, businesses and other stakeholders, reducing the gap between AI research and innovators. The platform aims to enable the latest AI advances to be rapidly transformed into real AI products and services deployed on the market.

The platform is supported by an ecosystem of projects funded under Horizon 2020 and Horizon Europe (focus on research), and Digital Europe programme (focus on industry and public administration). The current version of the platform includes several services:

  • An assets catalogue
  • A powerful virtual AI lab where researchers can explore content related to the AI-on-demand platform (AIoD) across multiple platforms
  • The ability for researchers to create their own libraries
  • Development, training, and sharing of AI pipelines
  • An extended and adapted marketplace catering to industry needs (planned for upcoming versions).

European Digital Innovation Hubs

A network of more than 200 European Digital Innovation Hubs (EDIHs) covering all regions of Europe aims to stimulate the broad uptake of AI, HPC, cybersecurity and other digital technologies. EDIHs help companies seeking to use AI technologies become more competitive in their business and production processes, products and services. As ‘one-stop shops’, EDIHs can locally promote the use of tools from the AI-on-demand platform to industry, including small and medium-sized enterprises, and the public sector. They will assist companies in innovating their products and services with AI and stimulate adoption by helping to make them market-ready.

At present, around 150 EDIHs are specialised in AI and decision support. Training sessions and webinars on AI (including generative AI) are organised frequently, and a dedicated working group on AI in public administration has been created.

Fund and scale innovative ideas and solutions for AI

Through the Horizon Europe and Digital Europe programmes, the Commission plans to invest €1 billion per year in AI, with measures to fund and scale innovative ideas and solutions. This will build bridges between Europe’s strong AI research community and innovators, in particular start-ups and SMEs both in their early stages and in the scale-up phase. The aim is to mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of this decade. The Recovery and Resilience Facility, the largest stimulus package ever financed through the EU budget, makes €134 billion available for digital and allows Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.


EU parliament greenlights landmark artificial intelligence regulations

Europe moves closer to adopting world’s first AI rules as EU lawmakers endorse provisional agreement.

The European Parliament has given final approval to wide-ranging rules to govern artificial intelligence.

The far-reaching regulation – the Artificial Intelligence Act – was passed by lawmakers on Wednesday. Senior European Union officials said the rules, first proposed in 2021, will protect citizens from the possible risks of a technology developing at breakneck speed while also fostering innovation.

Brussels has sprinted to pass the new law since Microsoft-backed OpenAI’s ChatGPT arrived on the scene in late 2022, unleashing a global AI race.

Just 46 lawmakers in the European Parliament in Strasbourg voted against the proposal. It won the support of 523 MEPs.

The European Council is expected to formally endorse the legislation by May. It will be fully applicable 24 months after its entry into force.

The rules will cover high-impact, general-purpose AI models and high-risk AI systems, which will have to comply with specific transparency obligations and EU copyright laws.

The act will regulate foundation models, or generative AI, such as the one behind OpenAI’s ChatGPT, which are trained on large volumes of data to generate new content and perform tasks.

Government use of real-time biometric surveillance in public spaces will be restricted to cases of certain crimes; prevention of genuine threats, such as “terrorist” attacks; and searches for people suspected of the most serious crimes.

“Today is again an historic day on our long path towards regulation of AI,” said Brando Benifei, an Italian lawmaker who pushed the text through parliament with Romanian MEP Dragos Tudorache.

“[This is] the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI,” he said.

“We managed to find that very delicate balance between the interest to innovate and the interest to protect,” Tudorache told journalists.

The EU’s internal market commissioner, Thierry Breton, hailed the vote.

“I welcome the overwhelming support from the European Parliament for the EU AI Act,” he said. “Europe is now a global standard-setter in trustworthy AI.”

AI policing restrictions

The EU’s rules take a risk-based approach: the riskier the system, the tougher the requirements – with outright bans on the AI tools deemed to carry the most threat.

For example, high-risk AI providers must conduct risk assessments and ensure their products comply with the law before they are made available to the public.

“We are regulating as little as possible and as much as needed with proportionate measures for AI models,” Breton told the Agence France-Presse news agency.

Violations can see companies hit with fines ranging from 7.5 million to 35 million euros ($8.2m to $38.2m), depending on the type of infringement and the firm’s size.

There are strict bans on using AI for predictive policing and systems that use biometric information to infer an individual’s race, religion or sexual orientation.

The rules also ban real-time facial recognition in public spaces with some exceptions for law enforcement. Police must seek approval from a judicial authority before any AI deployment.

Lobbies vs watchdogs

Because AI will likely transform every aspect of Europeans’ lives and big tech firms are vying for dominance in what will be a lucrative market, the EU has been subject to intense lobbying over the legislation.

Watchdogs have pointed to campaigning by French AI start-up Mistral AI and Germany’s Aleph Alpha as well as US-based tech giants like Google and Microsoft.

They warned the implementation of the new rules “could be further weakened by corporate lobbying”, adding that research showed “just how strong corporate influence” was during negotiations.

“Many details of the AI Act are still open and need to be clarified in numerous implementing acts, for example, with regard to standards, thresholds or transparency obligations,” three watchdogs based in Belgium, France and Germany said.

Breton stressed that the EU “withstood the special interests and lobbyists calling to exclude large AI models from the regulation”, maintaining: “The result is a balanced, risk-based and future-proof regulation.”

Tudorache said the law was “one of the … heaviest lobbied pieces of legislation, certainly in this mandate”, but insisted: “We resisted the pressure.”


European AI Office

The European AI Office will be the centre of AI expertise across the EU. It will play a key role in implementing the AI Act – especially for general-purpose AI – and will foster the development and use of trustworthy AI as well as international cooperation.

The European AI Office will support the development and use of trustworthy AI, while protecting against AI risks. The AI Office was established within the European Commission as the centre of AI expertise and forms the foundation for a single European AI governance system.

The EU aims to ensure that AI is safe and trustworthy. For this purpose, the AI Act is the first-ever comprehensive legal framework on AI worldwide, guaranteeing the health, safety and fundamental rights of people, and providing legal certainty to businesses across the 27 Member States.

The AI Office is uniquely equipped to support the EU’s approach to AI. It will play a key role in implementing the AI Act by supporting the governance bodies in Member States in their tasks. It will enforce the rules for general-purpose AI models. This is underpinned by the powers given to the Commission by the AI Act, including the ability to conduct evaluations of general-purpose AI models, request information and measures from model providers, and apply sanctions. The AI Office also promotes an innovative ecosystem of trustworthy AI, to reap the societal and economic benefits. It will ensure a strategic, coherent and effective European approach on AI at the international level, becoming a global reference point.

To support well-informed decision-making, the AI Office collaborates with Member States and the wider expert community through dedicated fora and expert groups. These combine knowledge from the scientific community, industry, think tanks, civil society and the open-source ecosystem, ensuring that their views and expertise are taken into account. Grounded in comprehensive insights into the AI ecosystem, including advances in capabilities, deployment and other trends, the AI Office fosters a thorough understanding of potential benefits and risks.


In January 2024, the Commission launched an AI innovation package to support startups and SMEs in developing trustworthy AI that complies with EU values and rules. Both the ‘GenAI4EU’ initiative and the AI Office were part of this package. Together they will contribute to the development of novel use cases and emerging applications in Europe’s 14 industrial ecosystems, as well as the public sector. Application areas include robotics, health, biotech, manufacturing, mobility, climate and virtual worlds.

Tasks of the AI Office

Supporting the AI Act and enforcing general-purpose AI rules

The AI Office makes use of its expertise to support the implementation of the AI Act by:

  • Contributing to the coherent application of the AI Act across the Member States, including the set-up of advisory bodies at EU level, facilitating support and information exchange
  • Developing tools, methodologies and benchmarks for evaluating capabilities and reach of general-purpose AI models, and classifying models with systemic risks
  • Drawing up state-of-the-art codes of practice to detail the rules, in cooperation with leading AI developers, the scientific community and other experts
  • Investigating possible infringements of rules, including evaluations to assess model capabilities, and requesting providers to take corrective action
  • Preparing guidance and guidelines, implementing and delegated acts, and other tools to support effective implementation of the AI Act and monitor compliance with the regulation

Strengthening the development and use of trustworthy AI

The Commission aims to foster trustworthy AI across the internal market. The AI Office, in collaboration with relevant public and private actors and the startup community, contributes to this by:

  • Advancing actions and policies to reap the societal and economic benefits of AI across the EU
  • Providing advice on best practices and enabling ready-access to AI sandboxes, real-world testing and other European support structures for AI uptake
  • Encouraging innovative ecosystems of trustworthy AI to enhance the EU’s competitiveness and economic growth
  • Aiding the Commission in leveraging the use of transformative AI tools and reinforcing AI literacy

Fostering international cooperation

At international level, the AI Office contributes to a strategic, coherent, and effective EU approach, by:

  • Promoting the EU’s approach to trustworthy AI, including collaboration with similar institutions worldwide
  • Fostering international cooperation and governance on AI, with the aim of contributing to a global approach to AI
  • Supporting the development and implementation of international agreements on AI, including the support of Member States

To effectively carry out all tasks based on evidence and foresight, the AI Office continuously monitors the AI ecosystem, technological and market developments, but also the emergence of systemic risks and any other relevant trends.

Cooperation with institutions, experts and stakeholders

Collaboration with a diverse range of institutions, experts and stakeholders is essential for the work of the AI Office.

At an institutional level, the AI Office works closely with the European Artificial Intelligence Board formed by Member State representatives and the European Centre for Algorithmic Transparency (ECAT) of the Commission.

The Scientific Panel of independent experts ensures a strong link with the scientific community. Further technical expertise is gathered in an Advisory Forum, representing a balanced selection of stakeholders, including industry, startups and SMEs, academia, think tanks and civil society. The AI Office may also partner up with individual experts and organisations. It will also create fora for cooperation of providers of AI models and systems, including general-purpose AI, and similarly for the open-source community, to share best practices and contribute to the development of codes of conduct and codes of practice.

The AI Office will also oversee the AI Pact, which allows businesses to engage with the Commission and other stakeholders, such as by sharing best practices and joining activities. This engagement will start before the AI Act becomes applicable and will allow businesses to plan ahead and prepare for its implementation. All of this will be part of the European AI Alliance, a Commission initiative to establish an open policy dialogue on AI.

Further initiatives to foster trustworthy AI development and uptake within the EU are mapped in the Coordinated Plan on AI.

Job opportunities and collaboration

The AI Office is recruiting talent with a variety of backgrounds for policy, technical and legal work, as well as administrative assistance. Find more information about the vacancies in the announcement. The deadline for expressions of interest is 27 March 2024 at 12:00 CET. You can express your interest via the respective application form for technology specialists and administrative assistants.

Check the calls for expression of interest on the EPSO website.

External experts and stakeholders will also have the chance to join dedicated fora, and to support the work of the AI Office, through a separate call for expression of interest.

You can also sign up to receive updates from the AI Office.

You can get in touch with the European AI Office through dedicated contact points for general inquiries and for inquiries related to job opportunities.



Coordinated Plan on Artificial Intelligence

The Coordinated Plan on Artificial Intelligence aims to accelerate investment in AI, implement AI strategies and programmes and align AI policy to prevent fragmentation within Europe.

The Coordinated Plan on Artificial Intelligence (AI) was published in 2018. It is a joint commitment between the Commission, EU Member States, Norway and Switzerland to maximise Europe’s potential to compete globally. The initial Plan defined actions and funding instruments for the uptake and development of AI across sectors. In parallel, Member States were encouraged to develop their own national strategies.

The plan’s latest update was published in 2021. It shows Europe’s commitment to creating global leadership in trustworthy AI. The 2021 plan is also closely aligned with the Commission’s digital and green priorities, and Europe’s response to the COVID-19 pandemic.

The Coordinated Plan of 2021 aims to turn strategy into action by setting out measures to:

  • accelerate investments in AI technologies to drive resilient economic and social recovery, aided by the uptake of new digital solutions
  • fully and promptly implement AI strategies and programmes to ensure that the EU maximises the advantages of being an early adopter
  • align AI policy to remove fragmentation and address global challenges

To achieve this, the updated plan establishes four key sets of policy objectives, supported by concrete actions. It also indicates possible funding mechanisms and establishes a timeline.

The 2024 Communication on boosting startups and innovation in trustworthy AI builds on both the 2018 and 2021 Coordinated Plans on AI. This reflects a policy shift towards generative AI in response to the latest technological developments. Similarly, the adopted version of the AI Act also includes provisions on generative AI. These rules expand on the Commission’s original proposal from 2021, which aimed to build a trustworthy AI ecosystem for the present and future.

The 2024 Communication proposes:

  • a strategic investment framework to leverage the EU’s assets – such as supercomputing infrastructure – to foster an innovative European AI ecosystem.
  • collaboration between startups, innovators, and industrial users, aiming to attract investments to the EU and provide access to key AI components like data, computing power, algorithms, and talent.
  • actions and investments to support startups and industries in Europe to become global leaders in trustworthy advanced AI models, systems, and applications.
  • a package of measures (Under GenAI4EU) to support European startups and SMEs in developing trustworthy AI that adheres to EU values and regulations, including respecting privacy and data protection rules.


The Commission proposed a minimum of €1 billion in annual investment in AI from the Horizon Europe and Digital Europe programmes, a target achieved in 2021 and 2022. EU funding for AI aims to attract and consolidate investments; fostering collaboration among Member States maximises its impact.

The Recovery and Resilience Facility provides an unprecedented opportunity to modernise and invest in AI. Through this, the EU can become a global leader in the development and uptake of human-centric, trustworthy, secure and sustainable AI technologies. By September 2023, it had already invested €4.4 billion in AI. More information can be found in the report Mapping EU level funding instruments to Digital Decade targets.

The actions outlined in the plans have been actively implemented by both the Commission and Member States and progress was made in all chapters. Notably, the EU is fostering critical computing capacity through several successful actions:

  1. The Chips Act establishes a legislative foundation to enhance the semiconductor industry’s resilience.
  2. The Chips Joint Undertaking (Chips JU) accelerates semiconductor technologies in Europe.
  3. The EuroHPC JU develops advanced computing capabilities accessible to European SMEs.
  4. The Testing and Experimentation Facilities (TEFs) support AI technology development for Edge AI Components and Systems.
  5. The Important Projects of Common European Interest (IPCEI) promote collaboration among Member States in cutting-edge microelectronics and communication projects.

Together, these initiatives create a synergistic ecosystem for advancing microelectronics and computing capacity in Europe. The Commission is also monitoring and assessing the progress of these actions and will – in collaboration with Member States – report on the monitoring during 2024.


Member States and the Commission have collaborated closely and met regularly to work on the actions under the different plans. They progressed in all areas of the plan, including by proposing a data strategy, supporting small and medium-sized enterprises, and creating conditions for excellence in the research, development and uptake of AI in Europe.

Overall, the first two years of implementation confirmed that joint actions and structured cooperation between Member States and the Commission are key to the EU’s global competitiveness and leadership in AI development and uptake. Most Member States have adopted national AI strategies and started to implement them. Investments in AI have increased, and the EU was able to mobilise critical resources to support these processes.


Sectorial AI Testing and Experimentation Facilities under the Digital Europe Programme

To make the EU the place where AI excellence thrives from the lab to the market, the European Union is setting up world-class Testing and Experimentation Facilities (TEFs) for AI.

Together with Member States, the Commission is co-funding the TEFs to support AI developers in bringing trustworthy AI to the market more efficiently and to facilitate its uptake in Europe. TEFs are specialised large-scale reference sites, open to all technology providers across Europe, for testing and experimenting at scale with state-of-the-art AI solutions, including both software and hardware products and services (e.g. robots), in real-world environments.

These large-scale reference testing and experimentation facilities will offer a combination of physical and virtual facilities in which technology providers can get support to test their latest AI-based software and hardware technologies in real-world environments. This will include support for the full integration, testing and experimentation of the latest AI-based technologies to solve issues or improve solutions in a given application sector, including validation and demonstration.

TEFs can also contribute to the implementation of the Artificial Intelligence Act by supporting regulatory sandboxes in cooperation with competent national authorities for supervised testing and experimentation.

TEFs will be an important part of building the AI ecosystem of excellence and trust to support Europe’s strategic leadership in AI.

The Digital Europe Programme 2023-2024 proposes a Coordination and Support Action (CSA) to apply a cross-sector perspective to all existing sectorial Testing and Experimentation Facilities (TEFs). The action was launched on 25 April. For more on the information session, see our event report page.

TEF Projects

The selected TEF projects started on 1 January 2023. They focus on the following high-impact sectors:

  • Agri-Food: project “agrifoodTEF”
  • Healthcare: project “TEF-Health”
  • Manufacturing: project “AI-MATTERS”
  • Smart Cities & Communities: project “Citcom.AI”

Co-funding between the European Commission (through the Digital Europe Programme) and the Member States will support the TEFs for five years with budgets of €40-60 million per project. On 27 June, the European Commission, along with Member States and 128 partners from research, industry and public organisations, launched their investment in the four projects.

Smart cities:

Artificial Intelligence Testing and Experimentation Facilities for Smart Cities & Communities: Citcom.AI

The new EU-wide network of permanent testing and experimentation facilities (TEFs) for smart cities and communities will help accelerate the development of trustworthy AI in Europe by giving companies access to test and try out AI-based products in real-world conditions.

By further developing and strengthening existing infrastructures and expertise, Citcom.AI provides reality-lab conditions in test and experimentation facilities relevant for AI and robotics solutions targeting the sustainable development of cities and communities. In doing so, it helps European cities and communities in the transition towards a greener and more digital Europe and in maintaining and developing their resilience and competitiveness. The project focuses on three overarching themes:

  • POWER targets changing energy systems and reducing energy consumption.
  • MOVE targets more efficient and greener transportation linked to logistics and mobility.
  • CONNECT serves citizens through local infrastructures and cross-sector services.

These areas support AI and robot-based innovations that promote solutions organised according to the three overarching themes for use cases such as:

  • POWER: energy, such as local district heating load forecasts; environmental solutions, such as adaptive street lighting; and cybersecurity, ethics and edge learning.
  • MOVE: urban machine-learning algorithms, such as predicting pedestrian flow; smart intersections, such as identifying road safety concerns; electro-mobility; and autonomous driving.
  • CONNECT: pollution, greenhouse gas emission and noise management; urban development management; water and wastewater management; integrated facility management; delivery management by drones; and tourism management.

Citcom.AI is organised as three “super nodes” (Nordic, Central and South), with satellites and sub-nodes located across 11 countries in the European Union: Denmark, Sweden, Finland, the Netherlands, Belgium, Luxembourg, France, Germany, Spain, Poland and Italy. The consortium of 36 partners is coordinated by the Technical University of Denmark. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €40 million and is expected to achieve long-term financial sustainability.

Relevant link:

Join us in building the European way of Digital Transformation for 300 million Europeans | Living in EU


Artificial Intelligence Testing and Experimentation Facilities for Health AI and Robotics: TEF-Health

The EU project TEF-Health is a network of real-world testing facilities, such as hospital platforms (both physical infrastructures and data and compute infrastructures), living labs and laboratory testing facilities, that will enable innovators to test and experiment with their AI and robotics solutions in large-scale, sustainable real or realistic environments. The consortium is implementing evaluation activities that facilitate market access for trustworthy intelligent technologies, particularly by considering new regulatory requirements (certification, standardisation, codes of conduct, etc.). TEF-Health will ensure easy access to these evaluation resources (for example, through links with digital innovation hubs).

In doing so, TEF-Health contributes to increasing the effectiveness, resilience and sustainability of EU health and care systems; reducing healthcare delivery inequalities in the EU; and ensuring compliance with legal, ethical, quality and interoperability standards.

A key component of an agile certification process is regulatory sandboxes, where all relevant stakeholders can work together to create innovative testing and validation tools for trustworthy AI in medical devices for specific use cases.

The use-cases are defined in four domains: 1) Neurotec, 2) Cancer, 3) CardioVascular and 4) Intensive Care.

The consortium comprises seven nodes in Germany, France, Sweden, Belgium, Portugal, Slovakia, Italy, two associated nodes in Finland and Czechia and the pan-EU structures EBRAINS AISBL, EITHealth and EHDS2 Pilot initiative. The consortium of 51 partners is coordinated by Charité – Universitätsmedizin Berlin. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.

Relevant links:

European Cancer Imaging Initiative | Shaping Europe’s digital future

A cancer plan for Europe

European Health Data Space

European data strategy


Artificial Intelligence Testing and Experimentation Facilities for Agrifood Innovation: AgrifoodTEF

Built as a network of physical and digital facilities across Europe, the EU project agrifoodTEF provides services that help assess and validate third party AI and Robotics solutions in real-world conditions aiming to foster sustainable and efficient food production. AgrifoodTEF offers validation tools to innovators so they can develop their ideas into market products and services.

There are five impact sectors: arable farming (performance enhancement of autonomous driving vehicles), tree crops (optimisation of natural resources and inputs for Mediterranean crops), horticulture (finding the right nutrient balance as well as crop and yield quality), livestock farming (improvement of sustainability in cow, pig and poultry farming) and food processing (traceability of production and supply chains).

The use cases include quality crops, agro-machinery, AI conformity assessment, agro ecology in controlled environments, co-creation in agrifood production, HPC for agrifood, AI for arable and farmland machinery, and new frontiers for sustainable farming in the North.

AgrifoodTEF is organised in national nodes (Italy, Germany and France) and satellite nodes (Poland, the Netherlands, Belgium, Sweden and Austria). The consortium of 29 partners is coordinated by Fondazione Bruno Kessler. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.

Relevant links:

Farm to Fork Strategy


Artificial Intelligence Testing and Experimentation Facilities for Manufacturing Innovation: AI-Matters

The AI-MATTERS project is building a network of physical and digital facilities across Europe where innovators can validate their solutions under real-life conditions. The EU-project contributes to increasing the resilience and the flexibility of the European manufacturing sector through the deployment of the latest developments in AI, robotics, smart and autonomous systems.

AI-MATTERS will provide an extensive catalogue of services to innovators in the following key topics: factory-level optimization, human-robot interaction, circular economy and adoption of emerging AI enabling technologies. All consortium members bring their expertise in manufacturing for different sectors such as automotive, space and mobility, textile, recycling, etc.

The AI-Matters network will provide testing and experimentation facilities from companies across Europe at eight locations in Denmark, France, Germany, Greece, Italy, the Netherlands, Spain and the Czech Republic. The consortium of 25 partners is coordinated by CEA-List. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.


Commission launches AI innovation package to support Artificial Intelligence startups and SMEs

The Commission this week launched a package of measures to support European startups and SMEs in the development of trustworthy Artificial Intelligence (AI) that respects EU values and rules.

This follows the political agreement reached in December 2023 on the EU AI Act – the world’s first comprehensive law on Artificial Intelligence – which will support the development, deployment and take-up of trustworthy AI in the EU.

In her 2023 State of the Union address, President von der Leyen announced a new initiative to make Europe’s supercomputers available to innovative European AI startups to train their trustworthy AI models. As a first step, the Commission launched in November 2023 the Large AI Grand Challenge, a prize giving AI startups financial support and supercomputing access. This package puts that commitment into practice through a broad range of measures to support AI startups and innovation, including a proposal to provide privileged access to supercomputers to AI startups and the broader innovation community.

Read more about: