Could the new EU AI Act stifle genAI innovation in Europe? A new study says it could

“The EU’s AI Act needs to be closely monitored to avoid overburdening innovative AI developers… with unnecessary red tape,” one industry body warns.


Europe’s growing generative artificial intelligence (GenAI) landscape is highly competitive, but additional regulation could stifle innovation, a new study has found.

The report by Copenhagen Economics said that there are no “immediate competition concerns” in Europe’s generative AI scene that would warrant regulatory intervention. The study comes as EU regulators tighten competition rules in the AI market through the Digital Markets Act, the EU AI Act, and the AI Office.

The Computer & Communications Industry Association (CCIA Europe), which commissioned the independent report, warned that regulatory intervention would be premature, slow down innovation and growth, and reduce consumer choice in generative AI.

“Allowing competition to flourish in the AI market will be more beneficial to European consumers than additional regulation prematurely being imposed, which would only stifle innovation and hinder new entrants,” said Aleksandra Zuchowska, CCIA Europe’s Competition Policy Manager.

“Instead, the impact of new AI-specific rules, such as the EU’s recently adopted AI Act, needs to be closely monitored to avoid overburdening innovative AI developers with disproportionate compliance costs and unnecessary red tape.”

The authors of the study noted that there is a growing number of foundation model developers active in the EU, such as Mistral AI and Aleph Alpha, and recognised that Europe’s GenAI sector is vibrant.

Competition concerns

But while they said there were no competition concerns in the short term, some may emerge in the near future, including uncertainty for GenAI start-ups as they face challenges in scaling up and regulatory costs, such as those stemming from the EU AI Act.

The study also warned there are potential competition concerns, which include limited access to data, partnerships between large companies and smaller ones, and leveraging behaviour by big companies.

France’s Mistral AI is a prime example of a start-up that signed a partnership with Big Tech: it made its large language model (LLM) available to Microsoft Azure customers in February and gave Microsoft a minority stake in the AI company.

The study noted that if a larger partner uses its market power to exercise decisive control over a start-up or gain privileged or exclusive access to its technology, it could harm competition.

But it said partnerships are less likely to create competition concerns if there are no or limited exclusivity conditions and limited privileged access to the start-up’s valuable technological assets.


Draft UN resolution on AI aims to make it ‘safe and trustworthy’

A new draft resolution aims to close the digital divide on artificial intelligence.

The United States is spearheading the first United Nations resolution on artificial intelligence (AI), aimed at ensuring the new technology is “safe, secure and trustworthy” and that all countries, especially those in the developing world, have equal access.

The draft General Assembly resolution aims to close the digital divide between countries and make sure they are all at the table in discussions on AI — and that they have the technology and capabilities to take advantage of its benefits, including detecting diseases, predicting floods and training the next generation of workers.

The draft recognises the rapid acceleration of AI development and use and stresses “the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems”.

It also recognises that “the governance of artificial intelligence systems is an evolving area” that needs further discussions on possible governance approaches.

Fostering ‘safe and trustworthy’ AI

US National Security Advisor Jake Sullivan said the United States turned to the General Assembly “to have a truly global conversation on how to manage the implications of the fast-advancing technology of AI”.

The resolution “would represent global support for a baseline set of principles for the development and use of AI and would lay out a path to leverage AI systems for good while managing the risks,” he said in a statement to The Associated Press.

If approved, Sullivan said, “this resolution will be an historic step forward in fostering safe, secure and trustworthy AI worldwide.”

The United States began negotiating with the 193 UN member nations about three months ago, spent hundreds of hours in direct talks with individual countries, 42 hours in negotiations and accepted input from 120 nations, a senior US official said.

The resolution went through several drafts and achieved consensus support from all member states this week and will be formally considered later this month, the official said, speaking on condition of anonymity because he was not authorised to speak publicly.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding, but they are an important barometer of world opinion.

A key goal, according to the draft resolution, is to use AI to help spur progress toward achieving the UN’s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.

The draft resolution encourages all countries, regional and international organisations, technical communities, civil society, the media, academia, research institutions and individuals “to develop and support regulatory and governance approaches and frameworks” for safe AI systems.

It warns against “improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law”.

New AI regulations

Lawmakers in the European Union are set to give final approval to the world’s first comprehensive AI rules on Wednesday. Countries around the world, including the US and China, as well as global groupings such as the Group of 20 industrialised nations, are also moving to draw up AI regulations.

The US draft calls on the 193 UN member states and others to assist developing countries in accessing the benefits of digital transformation and safe AI systems. It “emphasises that human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of artificial intelligence systems.”

US Ambassador Linda Thomas-Greenfield recalled President Joe Biden’s address to the General Assembly last year where he said emerging technologies, including AI, hold enormous potential.

She said the resolution, which is co-sponsored by dozens of countries, “aims to build international consensus on a shared approach to the design, development, deployment and use of AI systems,” particularly to support the 2030 UN goals.

The resolution responds to “the profound implications of this technology,” Thomas-Greenfield said, and if adopted it will be “an historic step forward in fostering safe, secure and trustworthy AI worldwide.”


EU Policy. Lawmakers approve AI Act with overwhelming majority

Lawmakers in the European Parliament today (13 March) approved the AI Act, rules aimed at regulating AI systems according to a risk-based approach.

Lawmakers in the European Parliament today (13 March) approved the AI Act, rules aimed at regulating AI according to a risk-based approach with an overwhelming majority. The law passed with 523 votes in favour, 46 against and 49 abstentions.

The act, which needed final endorsement after approval at political and technical level, will now most likely enter into force this May.

Parliament AI Act co-lead, Italian lawmaker Brando Benifei (S&D), described it as “a historic day” in a subsequent press conference.

“We have the first regulation in the world which puts a clear path for a safe and human centric development of AI. We have now got a text that reflects the parliament’s priorities,” he said.

“The main point now will be implementation and compliance by businesses and institutions. We are also working on further legislation for the next mandate, such as a directive on conditions in the workplace and AI,” Benifei said.

His counterpart Dragoş Tudorache (Romania/Renew) told the same conference that the EU is looking to partner countries to ensure the rules have a global impact. “We have to be open to work with others on how to promote these rules, and build a governance with like-minded parties,” he said.

Entry into force

Under the AI Act, machine learning systems will be divided into four main categories according to the potential risk they pose to society. The systems that are considered high risk will be subject to stringent rules that will apply before they enter the EU market.

The general-purpose AI rules will apply one year after entry into force, in May 2025, and the obligations for high-risk systems in three years. They will be under the oversight of national authorities, supported by the AI Office inside the European Commission. It will now be up to the member states to set up national oversight agencies; the Commission told Euronews that countries have 12 months to nominate these watchdogs.

In a response to today’s vote, Cecilia Bonefeld-Dahl, head of EU trade organisation Digital Europe, said that more needs to be done to keep companies based in Europe.

“Today, only 3% of the world’s AI unicorns come from the EU, with about 14 times more private investment in AI in the US and five times more in China. By 2030, the global AI market is expected to reach $1.5 trillion, and we need to ensure that European companies tap into that without getting tangled up in red tape,” Bonefeld-Dahl said.

Ursula Pachl, Deputy Director General of the European Consumer Organisation (BEUC), welcomed the approval of the law and said it will help consumers to join collective redress claims if they have been harmed by the same AI system.

“Although the legislation should have gone further to protect consumers, the top priority for the European Commission and national governments should now be to show they are serious about the AI Act by implementing it without delay and providing the relevant regulators that will enforce it with the necessary resources,” Pachl said.


EU Policy. As the EU AI Act enters into force, focus shifts to countries’ oversight appointments

Member states have 12 months to nominate national competent authorities tasked with compliance.

Europe’s AI Act, the world’s first attempt to regulate AI systems according to a risk-based approach, will get the final nod in the European Parliament tomorrow (13 March), paving the way for the rules to finally enter into force. However, when it comes to oversight, member states are still in the early stages of determining which regulator is best placed to oversee compliance.

Tomorrow, the lawmakers will vote on the text without the minor linguistic changes made by the lawyers during the translation phase. These also need to be formally approved, either by a separate vote in the April plenary or by a formal announcement. The rules will then be published in the EU official journal which is likely to happen in May.

Under the AI Act, machine learning systems will be divided into four main categories according to the potential risk they pose to society. The systems that are considered high risk will be subject to stringent rules that will apply before they enter the EU market.

In November, bans on the prohibited practices specified in the AI Act will apply. The general-purpose AI rules will apply one year after entry into force, in May 2025, and the obligations for high-risk systems in three years. They will be under the oversight of national authorities, supported by the AI Office inside the European Commission.

A European Commission spokesperson told Euronews that the countries have 12 months to nominate relevant national competent authorities, and the commission “awaits notification of these appointments in due course”.


Spain was the first EU country to set up an Agency for the Supervision of Artificial Intelligence (AESIA) in A Coruña in 2023. In other countries such as the Netherlands, the data protection authority set up a department dealing with algorithms last year. The office currently has 12 employees and the expectation is that it will grow to 20 people this year.

In the case of Ireland, the Department of Enterprise, Trade and Employment will lead the development of the national implementation plan for the AI Act. A spokesperson told Euronews however that “as the EU legislative process has not yet completed, and the Act has not yet been adopted, it would be premature to speculate about the national arrangements for enforcement.”

The Luxembourg Department of Media, Telecommunications and Digital Policy said that the country is “working hard to implement the AI Act”.

“We are consulting with all the relevant stakeholders, first and foremost the existing authorities and regulators that will have a role to play in the new governance framework. An important aspect for us is an efficient coordination between regulators, as we want to make it as clear as possible for businesses and citizens to interact with the new legislation,” a spokesperson said.

Meanwhile, the Commission has also begun its recruitment process for policy and technical jobs at the AI Office, which will help deliver a harmonised application of the rules across the member states via information exchanges. The deadline for applications is 27 March.

Trade organisations have warned about gaps in implementation and enforcement now that the rules are on the brink of entering into force. CCIA Europe, which represents Big Tech companies, previously warned that implementation will be crucial “to not overburden companies” that try to innovate.

Digital Europe, which represents both tech companies and national trade associations, called during the AI Act trilogue negotiations for a 48-month transitional period to ensure the whole ecosystem is ready, as well as for the timely availability of harmonised standards for industry to comply with the rules.


European AI Office

The European AI Office will be the centre of AI expertise across the EU. It will play a key role in implementing the AI Act – especially for general-purpose AI – foster the development and use of trustworthy AI, and promote international cooperation.

The European AI Office will support the development and use of trustworthy AI, while protecting against AI risks. The AI Office was established within the European Commission as the centre of AI expertise and forms the foundation for a single European AI governance system.

The EU aims to ensure that AI is safe and trustworthy. For this purpose, the AI Act is the first-ever comprehensive legal framework on AI worldwide, guaranteeing the health, safety and fundamental rights of people, and providing legal certainty to businesses across the 27 Member States.

The AI Office is uniquely equipped to support the EU’s approach to AI. It will play a key role in implementing the AI Act by supporting the governance bodies in Member States in their tasks. It will enforce the rules for general-purpose AI models. This is underpinned by the powers given to the Commission by the AI Act, including the ability to conduct evaluations of general-purpose AI models, request information and measures from model providers, and apply sanctions. The AI Office also promotes an innovative ecosystem of trustworthy AI, to reap the societal and economic benefits. It will ensure a strategic, coherent and effective European approach on AI at the international level, becoming a global reference point.

To support well-informed decision-making, the AI Office collaborates with Member States and the wider expert community through dedicated fora and expert groups. These combine knowledge from the scientific community, industry, think tanks, civil society, and the open-source ecosystem, ensuring that their views and expertise are taken into account. Grounded in comprehensive insights into the AI ecosystem, including advances in capabilities, deployment and other trends, the AI Office fosters a thorough understanding of potential benefits and risks.


In January 2024, the Commission launched an AI innovation package to support startups and SMEs in developing trustworthy AI that complies with EU values and rules. Both the ‘GenAI4EU’ initiative and the AI Office were part of this package. Together they will contribute to the development of novel use cases and emerging applications in Europe’s 14 industrial ecosystems, as well as the public sector. Application areas include robotics, health, biotech, manufacturing, mobility, climate and virtual worlds.

Tasks of the AI Office

Supporting the AI Act and enforcing general-purpose AI rules

The AI Office makes use of its expertise to support the implementation of the AI Act by:

  • Contributing to the coherent application of the AI Act across the Member States, including the set-up of advisory bodies at EU level, facilitating support and information exchange
  • Developing tools, methodologies and benchmarks for evaluating capabilities and reach of general-purpose AI models, and classifying models with systemic risks
  • Drawing up state-of-the-art codes of practice to flesh out the rules, in cooperation with leading AI developers, the scientific community and other experts
  • Investigating possible infringements of rules, including evaluations to assess model capabilities, and requesting providers to take corrective action
  • Preparing guidance and guidelines, implementing and delegated acts, and other tools to support effective implementation of the AI Act and monitor compliance with the regulation

Strengthening the development and use of trustworthy AI

The Commission aims to foster trustworthy AI across the internal market. The AI Office, in collaboration with relevant public and private actors and the startup community, contributes to this by:

  • Advancing actions and policies to reap the societal and economic benefits of AI across the EU
  • Providing advice on best practices and enabling ready access to AI sandboxes, real-world testing and other European support structures for AI uptake
  • Encouraging innovative ecosystems of trustworthy AI to enhance the EU’s competitiveness and economic growth
  • Aiding the Commission in leveraging the use of transformative AI tools and reinforcing AI literacy

Fostering international cooperation

At international level, the AI Office contributes to a strategic, coherent, and effective EU approach, by:

  • Promoting the EU’s approach to trustworthy AI, including collaboration with similar institutions worldwide
  • Fostering international cooperation and governance on AI, with the aim of contributing to a global approach to AI
  • Supporting the development and implementation of international agreements on AI, including the support of Member States

To effectively carry out all tasks based on evidence and foresight, the AI Office continuously monitors the AI ecosystem, technological and market developments, but also the emergence of systemic risks and any other relevant trends.

Cooperation with institutions, experts and stakeholders

Collaboration with a diverse range of institutions, experts and stakeholders is essential for the work of the AI Office.

At an institutional level, the AI Office works closely with the European Artificial Intelligence Board formed by Member State representatives and the European Centre for Algorithmic Transparency (ECAT) of the Commission.

The Scientific Panel of independent experts ensures a strong link with the scientific community. Further technical expertise is gathered in an Advisory Forum, representing a balanced selection of stakeholders, including industry, startups and SMEs, academia, think tanks and civil society. The AI Office may also partner with individual experts and organisations. It will also create fora for cooperation among providers of AI models and systems, including general-purpose AI, and similarly for the open-source community, to share best practices and contribute to the development of codes of conduct and codes of practice.

The AI Office will also oversee the AI Pact, which allows businesses to engage with the Commission and other stakeholders, for example by sharing best practices and joining activities. This engagement will start before the AI Act becomes applicable and will allow businesses to plan ahead and prepare for its implementation. All this will be part of the European AI Alliance, a Commission initiative to establish an open policy dialogue on AI.

Further initiatives to foster trustworthy AI development and uptake within the EU are mapped out in the Coordinated Plan on AI.

Job opportunities and collaboration

The AI Office is recruiting talent with a variety of backgrounds for policy, technical and legal work and administrative assistance. Find more information about the vacancies in the announcement. The deadline for expressions of interest is 27 March 2024 at 12:00 CET. You can express your interest via the respective application forms for technology specialists and administrative assistants.

Check the calls for expression of interest on the EPSO website.

External experts and stakeholders will also have the chance to join dedicated fora, and to support the work of the AI Office, through a separate call for expression of interest.

You can also sign up to receive updates from the AI Office.

You can get in touch with the European AI Office via its dedicated contact addresses for general inquiries and for inquiries related to job opportunities.



Coordinated Plan on Artificial Intelligence

The Coordinated Plan on Artificial Intelligence aims to accelerate investment in AI, implement AI strategies and programmes and align AI policy to prevent fragmentation within Europe.

The Coordinated Plan on Artificial Intelligence (AI) was published in 2018. It is a joint commitment between the Commission, EU Member States, Norway and Switzerland to maximise Europe’s potential to compete globally. The initial Plan defined actions and funding instruments for the uptake and development of AI across sectors. In parallel, Member States were encouraged to develop their own national strategies.

The plan’s latest update was published in 2021. It shows Europe’s commitment to creating global leadership in trustworthy AI. The 2021 plan is also closely aligned with the Commission’s digital and green priorities, and Europe’s response to the COVID-19 pandemic.

The Coordinated Plan of 2021 aims to turn strategy into action by setting out to:

  • accelerate investments in AI technologies to drive resilient economic and social recovery, aided by the uptake of new digital solutions
  • fully and promptly implement AI strategies and programmes to ensure that the EU maximises the advantages of being an early adopter
  • align AI policy to remove fragmentation and address global challenges

To achieve this, the updated plan establishes four key sets of policy objectives, supported by concrete actions. It also indicates possible funding mechanisms and establishes a timeline for delivery.

The 2024 Communication on boosting startups and innovation in trustworthy AI builds on both the 2018 and 2021 coordinated plans on AI. It reflects a policy shift towards generative AI in response to the latest technological developments. Similarly, the adopted version of the AI Act also includes provisions on generative AI. These rules expand on the Commission’s original proposal from 2021, which aimed to build a trustworthy AI ecosystem for the present and future.

The 2024 Communication proposes:

  • a strategic investment framework to leverage the EU’s assets – such as supercomputing infrastructure – to foster an innovative European AI ecosystem.
  • collaboration between startups, innovators, and industrial users, aiming to attract investments to the EU and provide access to key AI components like data, computing power, algorithms, and talent.
  • actions and investments to support startups and industries in Europe to become global leaders in trustworthy advanced AI models, systems, and applications.
  • a package of measures (under GenAI4EU) to support European startups and SMEs in developing trustworthy AI that adheres to EU values and regulations, including respecting privacy and data protection rules.


The Commission proposed a minimum of €1 billion in annual investment in AI from the Horizon Europe and Digital Europe programmes, a target that was met in 2021 and 2022. EU funding for AI aims to attract and consolidate investment, and fostering collaboration among Member States maximises its impact.

The Recovery and Resilience Facility provides an unprecedented opportunity to modernise and invest in AI. Through it, the EU can become a global leader in the development and uptake of human-centric, trustworthy, secure and sustainable AI technologies. By September 2023, the facility had already invested €4.4 billion in AI. More information can be found in the report Mapping EU level funding instruments to Digital Decade targets.

The actions outlined in the plans have been actively implemented by both the Commission and Member States and progress was made in all chapters. Notably, the EU is fostering critical computing capacity through several successful actions:

  1. The Chips Act establishes a legislative foundation to enhance the semiconductor industry’s resilience.
  2. The Chips Joint Undertaking (Chips JU) accelerates semiconductor technologies in Europe.
  3. The EuroHPC JU develops advanced computing capabilities accessible to European SMEs.
  4. The Testing and Experimentation Facilities (TEFs) support AI technology development for Edge AI Components and Systems.
  5. The Important Projects of Common European Interest (IPCEI) promote collaboration among Member States in cutting-edge microelectronics and communication projects.

Together, these initiatives create a synergistic ecosystem for advancing microelectronics and computing capacity in Europe. The Commission is also monitoring and assessing the progress of these actions and will – in collaboration with Member States – report on the monitoring during 2024.


Member States and the Commission have collaborated closely and met regularly to work on the actions under the different plans. They have made progress in all areas of the plan, including by proposing a data strategy, supporting small and medium-sized enterprises, and creating the conditions for excellence in the research, development and uptake of AI in Europe.

Overall, the first two years of implementation confirmed that joint actions and structured cooperation between Member States and the Commission are key to the EU’s global competitiveness and leadership in AI development and uptake. Most Member States have adopted national AI strategies and started to implement them. Investments in AI have increased, and the EU was able to mobilise critical resources to support these processes.


Sectorial AI Testing and Experimentation Facilities under the Digital Europe Programme

To make the EU the place where AI excellence thrives from the lab to the market, the European Union is setting up world-class Testing and Experimentation Facilities (TEFs) for AI.

Together with Member States, the Commission is co-funding the TEFs to support AI developers in bringing trustworthy AI to the market more efficiently, and to facilitate its uptake in Europe. TEFs are specialised large-scale reference sites, open to all technology providers across Europe, to test and experiment at scale with state-of-the-art AI solutions, including both software and hardware products and services (e.g. robots), in real-world environments.

These large-scale reference testing and experimentation facilities will offer a combination of physical and virtual environments in which technology providers can get support to test their latest AI-based software and hardware technologies in real-world settings. This will include support for the full integration, testing and experimentation of the latest AI-based technologies to solve issues and improve solutions in a given application sector, including validation and demonstration.

TEFs can also contribute to the implementation of the Artificial Intelligence Act by supporting regulatory sandboxes in cooperation with competent national authorities for supervised testing and experimentation.

TEFs will be an important part of building the AI ecosystem of excellence and trust to support Europe’s strategic leadership in AI.

The Digital Europe Programme 2023-2024 proposes a Coordination and Support Action (CSA) to apply a cross-sector perspective to all existing sectorial Testing and Experimentation Facilities (TEFs). The action was launched on 25 April. For more on the information session, see our event report page.

TEF Projects

The selected TEF projects started on 1 January 2023. They focus on the following high-impact sectors:

  • Agri-Food: project “agrifoodTEF”
  • Healthcare: project “TEF-Health”
  • Manufacturing: project “AI-MATTERS”
  • Smart Cities & Communities: project “Citcom.AI”

Co-funding between the European Commission (through the Digital Europe Programme) and the Member States will support the TEFs for five years, with budgets of €40-60 million per project. On 27 June, the European Commission, along with Member States and 128 partners from research, industry, and public organisations, launched their investment in the four projects.

Smart cities:

Artificial Intelligence Testing and Experimentation Facilities for Smart Cities & Communities: Citcom.AI

The new EU-wide network of permanent testing and experimentation facilities (TEF) for smart cities and communities will help accelerate the development of trustworthy AI in Europe by giving companies access to test and try out AI-based products in real-world conditions.

By further developing and strengthening existing infrastructures and expertise, Citcom.AI provides reality-lab conditions in its test and experimentation facilities, relevant for AI and robotics solutions targeting the sustainable development of cities and communities. In doing so, it helps European cities and communities in the transition towards a greener and more digital Europe, and in maintaining and developing their resilience and competitiveness. The project focuses on three overarching themes:

  • POWER targets changing energy systems and reducing energy consumption.
  • MOVE targets more efficient and greener transportation linked to logistics and mobility.
  • CONNECT serves citizens through local infrastructures and cross-sector services.

These areas support AI- and robot-based innovations, with use cases organised under the three overarching themes, such as:

  • POWER: energy, such as local district heating load forecasts; environmental solutions, such as adaptive street lighting; cybersecurity, ethics and edge learning.
  • MOVE: urban machine learning algorithms, such as predicting pedestrian flow; smart intersections, such as identifying road safety concerns; electro-mobility and autonomous driving.
  • CONNECT: pollution, greenhouse gas emissions and noise management; urban development management; water and wastewater management; integrated facility management; delivery management by drones; and tourism management.

Citcom.AI is organised as three “super nodes” (Nordic, Central and South) with satellites and sub-nodes located across 11 countries in the European Union: Denmark, Sweden, Finland, the Netherlands, Belgium, Luxembourg, France, Germany, Spain, Poland and Italy. The consortium of 36 partners is coordinated by the Technical University of Denmark. Co-funded by the Digital Europe Programme, the five-year project started in January 2023 with an overall budget of €40 million and is expected to achieve long-term financial sustainability.

Relevant link:

Join us in building the European way of Digital Transformation for 300 million Europeans | Living in EU


Artificial Intelligence Testing and Experimentation Facilities for Health AI and Robotics: TEF-Health

The EU project TEF-Health is a network of real testing facilities, such as hospital platforms (both physical infrastructures and data and compute infrastructures), living labs and laboratory testing facilities, that will enable innovators to carry out tests and experiments of their AI and robotics solutions in large-scale, sustainable, real or realistic environments. The consortium is implementing evaluation activities that facilitate market access for trustworthy intelligent technologies, particularly by taking into account new regulatory requirements (certification, standardisation, codes of conduct, etc.). TEF-Health will ensure easy access to these evaluation resources (for example, through links with digital innovation hubs).

In doing so, TEF-Health contributes to increasing the effectiveness, resilience and sustainability of EU health and care systems; reducing healthcare delivery inequalities in the EU; and ensuring compliance with legal, ethical, quality and interoperability standards.

Regulatory sandboxes, where all relevant stakeholders can work together to create innovative testing and validation tools for trustworthy AI in medical devices for specific use cases, are a key component of an agile certification process.

The use cases are defined in four domains: 1) Neurotech, 2) Cancer, 3) Cardiovascular and 4) Intensive Care.

The consortium comprises seven nodes in Germany, France, Sweden, Belgium, Portugal, Slovakia and Italy; two associated nodes in Finland and Czechia; and the pan-EU structures EBRAINS AISBL, EIT Health and the EHDS2 Pilot initiative. The consortium of 51 partners is coordinated by Charité – Universitätsmedizin Berlin. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.

Relevant links:

European Cancer Imaging Initiative | Shaping Europe’s digital future

A cancer plan for Europe

European Health Data Space

European data strategy


Artificial Intelligence Testing and Experimentation Facilities for Agrifood Innovation: AgrifoodTEF

Built as a network of physical and digital facilities across Europe, the EU project agrifoodTEF provides services that help assess and validate third-party AI and robotics solutions in real-world conditions, aiming to foster sustainable and efficient food production. AgrifoodTEF offers validation tools to innovators so they can develop their ideas into market-ready products and services.

There are five impact sectors: arable farming (performance enhancement of autonomous driving vehicles), tree crops (optimisation of natural resources and inputs for Mediterranean crops), horticulture (finding the right nutrient balance as well as crop and yield quality), livestock farming (improvement of sustainability in cow, pig and poultry farming) and food processing (traceability of production and supply chains).

The use cases include quality crops, agro-machinery, AI conformity assessment, agro ecology in controlled environments, co-creation in agrifood production, HPC for agrifood, AI for arable and farmland machinery, and new frontiers for sustainable farming in the North.

AgrifoodTEF is organised in national nodes (Italy, Germany and France) and satellite nodes (Poland, the Netherlands, Belgium, Sweden and Austria). The consortium of 29 partners is coordinated by Fondazione Bruno Kessler. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.

Relevant links:

Farm to Fork Strategy


Artificial Intelligence Testing and Experimentation Facilities for Manufacturing Innovation: AI-Matters

The AI-MATTERS project is building a network of physical and digital facilities across Europe where innovators can validate their solutions under real-life conditions. The EU project contributes to increasing the resilience and flexibility of the European manufacturing sector through the deployment of the latest developments in AI, robotics, and smart and autonomous systems.

AI-MATTERS will provide an extensive catalogue of services to innovators in the following key areas: factory-level optimisation, human-robot interaction, the circular economy and the adoption of emerging AI enabling technologies. All consortium members bring their expertise in manufacturing for different sectors, such as automotive, space and mobility, textile and recycling.

The AI-Matters network will provide testing and experimentation facilities from companies across Europe at eight locations in Denmark, France, Germany, Greece, Italy, the Netherlands, Spain and the Czech Republic. The consortium of 25 partners is coordinated by CEA-List. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.


Commission launches AI innovation package to support Artificial Intelligence startups and SMEs

This week, the Commission launched a package of measures to support European startups and SMEs in the development of trustworthy Artificial Intelligence (AI) that respects EU values and rules.

This follows the political agreement reached in December 2023 on the EU AI Act – the world’s first comprehensive law on Artificial Intelligence – which will support the development, deployment and take-up of trustworthy AI in the EU.

In her 2023 State of the Union address, President von der Leyen announced a new initiative to make Europe’s supercomputers available to innovative European AI startups to train their trustworthy AI models. As a first step, the Commission launched in November 2023 the Large AI Grand Challenge, a prize giving AI startups financial support and supercomputing access. This package puts that commitment into practice through a broad range of measures to support AI startups and innovation, including a proposal to provide privileged access to supercomputers to AI startups and the broader innovation community.



Commission opens access to EU supercomputers to speed up artificial intelligence development

The Commission and the European High-Performance Computing Joint Undertaking (EuroHPC JU) have committed to open and widen access to the EU’s world-class supercomputing resources for European artificial intelligence (AI) start-ups, SMEs and the broader AI community as part of the EU AI Start-Up Initiative.

To support the further development and scalability of AI models, access to world-class supercomputers that accelerate AI training and testing is crucial, reducing training time from months or years to a matter of weeks.

The statement was made in the context of the fourth AI Alliance Assembly in Madrid and follows an announcement by President von der Leyen in her 2023 State of the Union address. European AI and high-performance computing (HPC) actors will closely cooperate to drive breakthrough innovation and enhance the competitiveness of the European AI industrial ecosystem. This will accelerate the development of AI and position the European Union as a globally competitive leader.

Full press release

European Commission High-Performance Computing

The European High-Performance Computing Joint Undertaking

A European Approach to Artificial Intelligence

The European AI Alliance


A European approach to artificial intelligence

The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.

How we approach Artificial Intelligence (AI) will define the world we live in tomorrow. To help build a resilient Europe for the Digital Decade, people and businesses should be able to enjoy the benefits of AI while feeling safe and protected.

The European AI Strategy aims to make the EU a world-class hub for AI and to ensure that AI is human-centric and trustworthy. This objective translates into the European approach to excellence and trust through concrete rules and actions.

In April 2021, the Commission presented its AI package, including:

  • a Communication on fostering a European approach to AI;
  • a review of the Coordinated Plan on Artificial Intelligence (with EU Member States);
  • its proposal for a regulation laying down harmonised rules on AI (the AI Act) and the relevant impact assessment.

A European approach to excellence in AI

Fostering excellence in AI will strengthen Europe’s potential to compete globally.

The EU will achieve this by:

  1. enabling the development and uptake of AI in the EU;
  2. making the EU the place where AI thrives from the lab to the market;
  3. ensuring that AI works for people and is a force for good in society;
  4. building strategic leadership in high-impact sectors.

The Commission and Member States agreed to boost excellence in AI by joining forces on policy and investments. The 2021 review of the Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape and bring AI strategy into action.

Maximising resources and coordinating investments is a critical component of AI excellence. Through the Horizon Europe and Digital Europe programmes, the Commission plans to invest €1 billion per year in AI. It will mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the digital decade.

The Recovery and Resilience Facility makes €134 billion available for digital. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.

Access to high quality data is an essential factor in building high performance, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act and the Digital Markets Act, and the Data Governance Act provide the right infrastructure for building such systems.

A European approach to trust in AI

Building trustworthy AI will create a safe and innovation-friendly environment for users, developers and deployers.

The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

  1. a European legal framework for AI to address fundamental rights and safety risks specific to AI systems;
  2. a civil liability framework – adapting liability rules to the digital age and AI;
  3. a revision of sectoral safety legislation (e.g. the Machinery Regulation and the General Product Safety Directive).

European proposal for a legal framework on AI

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard.

This framework gives AI developers, deployers and users the clarity they need by intervening only in those cases that existing national and EU legislation does not cover. The legal framework for AI proposes a clear, easy-to-understand approach based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.

Important milestones
