Europe regulates AI to boost investment in innovation and deep technologies: European Innovation Council to invest €1.4 billion in deep technologies in 2025
Next year, the European Innovation Council (EIC) will back deep technologies and high-potential start-ups from the EU with €1.4 billion, as set out in the EIC Work Programme for 2025. The budget is €200 million higher than in 2024 and aims to foster a more sustainable innovation ecosystem in Europe.
One of the main additions to the programme is the EIC’s new scheme under the Strategic Technologies for Europe Platform (STEP): its €300 million budget will finance larger investments in companies aiming to bring strategic technologies to the EU market.
The remaining budget is distributed across the following funding schemes:
EIC Pathfinder – for technology solutions with a technology readiness level of up to TRL 4 with the potential to lead to technological breakthroughs.
EIC Transition – an opportunity for consortia that have already achieved research results within the EIC Pathfinder or other Horizon 2020 and Horizon Europe programmes to turn them into innovations ready for market implementation.
EIC Accelerator – support for innovation projects in the final development phase.
Research organisations, universities, SMEs, start-ups, manufacturing companies, sole traders, large companies, small mid-caps and others are eligible for funding under the individual programmes.
By Ralitsa Hristova, 15 November 2024
Our CEO Dr. Galya Mancheva provided an update on the EU AI Act today on Bloomberg TV.
Some of the insights are the following:
The ban on AI systems posing unacceptable risks will apply six months after the entry into force.
Codes of practice will apply nine months after entry into force.
Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force.
High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
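The staggered deadlines above are all fixed offsets, in months, from the Act's entry-into-force date. A minimal Python sketch of that arithmetic; the `add_months` helper and the milestone labels are illustrative assumptions, not part of the official text:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole months; day clamping is not
    # needed here because we anchor on the first of the month.
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Entry-into-force date used for illustration (the Act entered
# into force on 1 August 2024).
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Offsets in months, as listed above.
MILESTONES = {
    "Bans on unacceptable-risk AI systems": 6,
    "Codes of practice": 9,
    "General-purpose AI transparency rules": 12,
    "High-risk system obligations": 36,
}

for label, offset in MILESTONES.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, offset)}")
```

Anchoring on the first of the month keeps the month arithmetic exact without needing third-party date libraries.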
By Ralitsa Hristova, 11 November 2024
“The EU’s AI Act needs to be closely monitored to avoid overburdening innovative AI developers… with unnecessary red tape,” one industry body warns.
Europe’s growing generative artificial intelligence (GenAI) landscape is highly competitive, but additional regulation could stifle innovation, a new study has found.
The report by Copenhagen Economics said that there are no “immediate competition concerns” in Europe’s generative AI scene that would warrant regulatory intervention. The study comes as EU regulators try to tighten rules on competition in the AI market through the Digital Markets Act, the EU AI Act, and the AI Office.
The Computer & Communications Industry Association (CCIA Europe), which commissioned the independent report, warned that regulatory intervention would be premature, slow down innovation and growth, and reduce consumer choice in generative AI.
“Allowing competition to flourish in the AI market will be more beneficial to European consumers than additional regulation prematurely being imposed, which would only stifle innovation and hinder new entrants,” said Aleksandra Zuchowska, CCIA Europe’s Competition Policy Manager.
“Instead, the impact of new AI-specific rules, such as the EU’s recently adopted AI Act, needs to be closely monitored to avoid overburdening innovative AI developers with disproportionate compliance costs and unnecessary red tape”.
The authors of the study noted that there is a growing number of foundation model developers active in the EU, such as Mistral AI and Aleph Alpha, and recognised that Europe’s GenAI sector is vibrant.
Competition concerns
But while they said there were no competition concerns in the short term, some may emerge in the near future, including uncertainty for GenAI start-ups as they face challenges in scaling up as well as regulatory costs, such as those arising from the EU AI Act.
The study also warned there are potential competition concerns, which include limited access to data, partnerships between large companies and smaller ones, and leveraging behaviour by big companies.
France’s Mistral AI is a prime example of a start-up that signed a partnership with Big Tech after it made its large language model (LLM) available to Microsoft Azure customers in February and gave Microsoft a minor stake in the AI company.
The study noted that if a larger partner uses its market power to exercise decisive control over a start-up or gain privileged or exclusive access to its technology, it could harm competition.
But it said partnerships are less likely to create competition concerns if there are no or limited exclusivity conditions and limited privileged access to the startup’s valuable technological assets.
Could the new EU AI Act stifle genAI innovation in Europe? A new study says it could
By Dr. Galia Mancheva, 22 March 2024 (updated 31 October 2024)
A new draft resolution aims to close the digital divide on artificial intelligence.
The United States is spearheading the first United Nations resolution on artificial intelligence (AI), aimed at ensuring the new technology is “safe, secure and trustworthy” and that all countries, especially those in the developing world, have equal access.
The draft General Assembly resolution aims to close the digital divide between countries and make sure they are all at the table in discussions on AI — and that they have the technology and capabilities to take advantage of its benefits, including detecting diseases, predicting floods and training the next generation of workers.
The draft recognises the rapid acceleration of AI development and use and stresses “the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems”.
It also recognises that “the governance of artificial intelligence systems is an evolving area” that needs further discussions on possible governance approaches.
Fostering ‘safe and trustworthy’ AI
US National Security Advisor Jake Sullivan said the United States turned to the General Assembly “to have a truly global conversation on how to manage the implications of the fast-advancing technology of AI”.
The resolution “would represent global support for a baseline set of principles for the development and use of AI and would lay out a path to leverage AI systems for good while managing the risks,” he said in a statement to The Associated Press.
If approved, Sullivan said, “this resolution will be an historic step forward in fostering safe, secure and trustworthy AI worldwide.”
The United States began negotiating with the 193 UN member nations about three months ago, spent hundreds of hours in direct talks with individual countries, 42 hours in negotiations and accepted input from 120 nations, a senior US official said.
The resolution went through several drafts and achieved consensus support from all member states this week and will be formally considered later this month, the official said, speaking on condition of anonymity because he was not authorised to speak publicly.
Unlike Security Council resolutions, General Assembly resolutions are not legally binding, but they are an important barometer of world opinion.
A key goal, according to the draft resolution, is to use AI to help spur progress toward achieving the UN’s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.
The draft resolution encourages all countries, regional and international organisations, technical communities, civil society, the media, academia, research institutions and individuals “to develop and support regulatory and governance approaches and frameworks” for safe AI systems.
It warns against “improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law”.
New AI regulations
Lawmakers in the European Union are set to give final approval to the world’s first comprehensive AI rules on Wednesday. Countries around the world, including the US and China, or global groupings like the Group of 20 industrialised nations also are moving to draw up AI regulations.
The US draft calls on the 193 UN member states and others to assist developing countries in accessing the benefits of digital transformation and safe AI systems. It “emphasises that human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of artificial intelligence systems.”
US Ambassador Linda Thomas-Greenfield recalled President Joe Biden’s address to the General Assembly last year where he said emerging technologies, including AI, hold enormous potential.
She said the resolution, which is co-sponsored by dozens of countries, “aims to build international consensus on a shared approach to the design, development, deployment and use of AI systems,” particularly to support the 2030 UN goals.
The resolution responds to “the profound implications of this technology,” Thomas-Greenfield said, and if adopted it will be “an historic step forward in fostering safe, secure and trustworthy AI worldwide.”
Draft UN resolution on AI aims to make it ‘safe and trustworthy’
By Dr. Galia Mancheva, 17 March 2024 (updated 3 April 2024)
Lawmakers in the European Parliament today (13 March) approved the AI Act, rules aimed at regulating AI systems according to a risk-based approach.
Lawmakers in the European Parliament today (13 March) approved the AI Act, rules aimed at regulating AI according to a risk-based approach, by an overwhelming majority. The law passed with 523 votes in favour, 46 against and 49 abstentions.
The act, which needed final endorsement after approval at political and technical level, will now most likely enter into force in May.
Parliament AI Act co-lead, Italian lawmaker Brando Benifei (S&D), described it as “a historic day” in a subsequent press conference.
“We have the first regulation in the world which puts a clear path for a safe and human centric development of AI. We have now got a text that reflects the parliament’s priorities,” he said.
“The main point now will be implementation of the law and compliance by businesses and institutions. We are also working on further legislation for the next mandate, such as a directive on conditions in the workplace and AI,” Benifei said.
His counterpart Dragoş Tudorache (Romania/Renew) told the same conference that the EU is looking to partner countries to ensure the rules have a global impact. “We have to be open to work with others on how to promote these rules, and build a governance with like-minded parties,” he said.
Entry into force
Under the AI Act, machine learning systems will be divided into four main categories according to the potential risk they pose to society. The systems that are considered high risk will be subject to stringent rules that will apply before they enter the EU market.
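As a rough illustration of that tiering logic, the sketch below maps example use cases to risk tiers. The example use cases, the tier groupings and the obligation summaries are simplified assumptions for illustration, not quotations from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    # The Act's four broad tiers; obligation summaries are simplified.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements before EU market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Hypothetical example use cases, mapped for illustration only.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Look up the tier and summarise what it entails.
    tier = EXAMPLES[use_case]
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("spam filter"))
```

The point of the sketch is the shape of the regime: the tier, not the technology itself, determines which obligations apply before a system reaches the EU market.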
The general-purpose AI rules will apply one year after entry into force, in May 2025, and the obligations for high-risk systems in three years. They will be under the oversight of national authorities, supported by the AI office inside the European Commission. It will now be up to the member states to set up national oversight agencies. The commission told Euronews that countries have 12 months to nominate these watchdogs.
In a response to today’s vote, Cecilia Bonefeld-Dahl, head of EU trade organisation Digital Europe, said that more needs to be done to keep companies based in Europe.
“Today, only 3% of the world’s AI unicorns come from the EU, with about 14 times more private investment in AI in the US and five times more in China. By 2030, the global AI market is expected to reach $1.5 trillion, and we need to ensure that European companies tap into that without getting tangled up in red tape,” Bonefeld-Dahl said.
Ursula Pachl, Deputy Director General of the European Consumer Organisation (BEUC), welcomed the approval of the law and said it will help consumers to join collective redress claims if they have been harmed by the same AI system.
“Although the legislation should have gone further to protect consumers, the top priority for the European Commission and national governments should now be to show they are serious about the AI Act by implementing it without delay and providing the relevant regulators that will enforce it with the necessary resources,” Pachl said.
EU Policy. Lawmakers approve AI Act with overwhelming majority
By Dr. Galia Mancheva, 16 March 2024 (updated 3 April 2024)
Member states have 12 months to nominate national competent authorities tasked with compliance.
Europe’s AI Act, the world’s first attempt to regulate AI systems according to a risk-based approach, will get the final nod in the European Parliament tomorrow (13 March), paving the way for the rules to finally enter into force. However, when it comes to oversight, member states are still in the early stages of determining which regulator is best placed to oversee compliance.
Tomorrow, the lawmakers will vote on the text without the minor linguistic changes made by the lawyers during the translation phase. These also need to be formally approved, either by a separate vote in the April plenary or by a formal announcement. The rules will then be published in the EU official journal which is likely to happen in May.
Under the AI Act, machine learning systems will be divided into four main categories according to the potential risk they pose to society. The systems that are considered high risk will be subject to stringent rules that will apply before they enter the EU market.
In November, bans on prohibited practices specified in the AI Act will apply. The general-purpose AI rules will apply one year after entry into force, May 2025, and the obligations for high-risk systems in three years. They will be under the oversight of national authorities, supported by the AI office inside the European Commission.
A European Commission spokesperson told Euronews that the countries have 12 months to nominate relevant national competent authorities, and the commission “awaits notification of these appointments in due course”.
Recruitment
Spain was the first EU country to set up an Agency for the Supervision of Artificial Intelligence (AESIA) in A Coruña in 2023. In other countries such as the Netherlands, the data protection authority set up a department dealing with algorithms last year. The office currently has 12 employees and the expectation is that it will grow to 20 people this year.
In the case of Ireland, the Department of Enterprise, Trade and Employment will lead the development of the national implementation plan for the AI Act. A spokesperson told Euronews however that “as the EU legislative process has not yet completed, and the Act has not yet been adopted, it would be premature to speculate about the national arrangements for enforcement.”
The Luxembourg Department of Media, Telecommunications and Digital Policy said that the country is “working hard to implement the AI Act”.
“We are consulting with all the relevant stakeholders, first and foremost the existing authorities and regulators that will have a role to play in the new governance framework. An important aspect for us is an efficient coordination between regulators, as we want to make it as clear as possible for businesses and citizens to interact with the new legislation,” a spokesperson said.
Meanwhile, the commission also began its recruitment process for policy and technical jobs at the AI Office, which will help with delivering a harmonised approach of the rules across the member states via information exchanges. The deadline for applications is 27 March.
Trade organisations have warned about the lack of implementation and enforcement now that the rules are on the brink of entering into force. CCIA Europe, which represents Big Tech companies, previously warned that the implementation will be crucial “to not overburden companies” that try to innovate.
Digital Europe, representing both tech companies and national trade associations, called in the AI Act trilogue negotiations for a 48-month transitional period to ensure the whole ecosystem is ready, as well as for the timely availability of harmonised standards for industry to comply with the rules.
EU Policy. As the EU AI Act enters into force, focus shifts to countries’ oversight appointments
By Dr. Galia Mancheva, 15 March 2024 (updated 3 April 2024)
The Communication on boosting startups and innovation in trustworthy AI outlines concrete joint actions for the Commission and Member States to create global leadership on trustworthy AI.
Strategies for developing and adopting trustworthy AI
The 2021 review of the Coordinated Plan includes proposals to make the EU the place where artificial intelligence (AI) excellence thrives from the lab to the market by promoting research in AI, encouraging its uptake, and funding innovation. The Commission strategy for supporting the development and adoption of trustworthy AI solutions is a values-driven approach which covers the whole lifecycle and focuses on building an ecosystem of excellence. The key actions seek to:
foster collaboration between stakeholders through Public-Private Partnerships (PPP);
build and mobilise AI research capabilities;
provide facilities for developers and companies to test and experiment with their latest AI-based technologies in real-world environments;
develop a European marketplace for trustworthy AI solutions, connecting resources and services;
fund and scale innovative ideas and solutions for AI.
European partnership on AI, Data and Robotics
The European Partnership on AI, Data and Robotics Association (ADRA) was officially launched in June 2021 with the signature of the Memorandum of Understanding between the European Commission and the ADR Association. ADRA was awarded €2.6 billion under the Horizon 2020 (2014-2020) and Horizon Europe (2021-2027) funding programmes. This public-private partnership brings together three key communities (AI, Data and Robotics) and five organisations: BDVA, CLAIRE, ELLIS, EurAI and euRobotics.
In 2024, ADRA counts more than 133 members, ranging from startups and SMEs to big industry, research organisations and RTOs. In November 2023, ADRA organised its first event, the ADR Forum, which focused on generative AI. ADRA engages with many stakeholders, including industry, to tackle the challenges of generative AI. This led to the establishment of two task forces in 2023:
GenAI – dedicated to addressing challenges related to generative AI.
Generative AI for Robotics – focused on generative AI applications in the field of robotics.
Getting world reference AI capabilities in Europe
The European Networks of Excellence in AI (NoEs) bring together the brightest AI minds in Europe to tackle challenging AI problems. NoEs benefit from funding under both Horizon 2020 and Horizon Europe and are driving research and development forward. These world-class networks encompass a significant portion of Europe’s top AI expertise. They foster synergy and a critical mass in European AI, helping to translate AI advancements into tangible real-world impacts across various domains.
As of 2023, the EU AI & Robotics NoEs community includes nine projects.
NoEs are strongly linked to ADRA. NoEs are the private side of a public-private partnership that plays a key role in bringing knowledge from the European research community to industry.
World-class testing and experimentation facilities
Since January 2023, four sectoral testing and experimentation facilities (TEFs) have been providing European SMEs with the means to test their latest AI-based technologies (Technology Readiness Levels 6-8) in real or close-to-real conditions. TEF services include support for full integration, as well as testing, experimentation, and validation of AI-based solutions in the sectors served by the four TEFs.
The TEFs are co-funded by the Commission and Member States with a total investment of over 220 million Euros and help innovators to test their close-to-market AI solutions before introducing them to the market.
In March 2024, the four TEFs held their official launch event, signalling their readiness to engage with businesses and provide technical assistance to AI innovators.
AI-on-demand platform
The AI-on-demand platform aims to build a bridge between European AI research, industry and public services. It acts as a catalyst for collaboration between academia and businesses, as well as other stakeholders, reducing the gap between AI research and innovators. The platform aims to enable the latest AI advances to be rapidly transformed into real AI products and services deployed on the market.
The platform is supported by an ecosystem of projects funded under Horizon 2020 and Horizon Europe (focus on research), and Digital Europe programme (focus on industry and public administration). The current version of the platform includes several services:
An assets catalogue
A powerful virtual AI lab where researchers can explore AI-on-Demand (AIoD) content across multiple platforms
The ability to create their own libraries
Development, training, and sharing of AI pipelines
An extended and adapted marketplace catering to industry needs in upcoming versions.
European Digital Innovation Hubs
A network of more than 200 European Digital Innovation Hubs (EDIHs) covering all regions of Europe aims to stimulate the broad uptake of AI, HPC, cybersecurity and other digital technologies. EDIHs help all companies seeking to use AI technologies to become more competitive in their business and production processes, products or services. As ‘one-stop shops’, EDIHs can locally promote the use of tools from the AI-on-demand platform to industry, including small and medium-sized enterprises, and to the public sector. They will assist companies in innovating new products and services with AI and stimulate adoption by helping to make them market-ready.
At present, there are around 150 EDIHs specialised in AI and decision support. Trainings and webinars on AI (including generative AI) are frequently organised, and a dedicated working group on AI in public administration was created.
Fund and scale innovative ideas and solutions for AI
Through the Horizon Europe and Digital Europe programmes, the Commission plans to invest €1 billion per year in AI, with measures to fund and scale innovative ideas and solutions. This will build bridges between Europe’s strong AI research community and innovators, in particular start-ups and SMEs, both in their early stages and in the scale-up phase. The aim is to mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of this decade. The Recovery and Resilience Facility, the largest stimulus package ever financed through the EU budget, makes €134 billion available for digital and allows Europe to amplify its ambitions and become a global leader in developing cutting-edge trustworthy AI.
Europe moves closer to adopting world’s first AI rules as EU lawmakers endorse provisional agreement.
The European Parliament has given final approval to wide-ranging rules to govern artificial intelligence.
The far-reaching regulation – the Artificial Intelligence Act – was passed by lawmakers on Wednesday. Senior European Union officials said the rules, first proposed in 2021, will protect citizens from the possible risks of a technology developing at breakneck speed while also fostering innovation.
Brussels has sprinted to pass the new law since Microsoft-backed OpenAI’s ChatGPT arrived on the scene in late 2022, unleashing a global AI race.
Just 46 lawmakers in the European Parliament in Strasbourg voted against the proposal. It won the support of 523 MEPs.
The European Council is expected to formally endorse the legislation by May. It will be fully applicable 24 months after its entry into force.
The rules will cover high-impact, general-purpose AI models and high-risk AI systems, which will have to comply with specific transparency obligations and EU copyright laws.
The act will regulate foundation models or generative AI, such as OpenAI’s ChatGPT, which are trained on large volumes of data to generate new content and perform tasks.
Government use of real-time biometric surveillance in public spaces will be restricted to cases of certain crimes; prevention of genuine threats, such as “terrorist” attacks; and searches for people suspected of the most serious crimes.
“Today is again an historic day on our long path towards regulation of AI,” said Brando Benifei, an Italian lawmaker who pushed the text through parliament with Romanian MEP Dragos Tudorache.
“[This is] the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI,” he said.
“We managed to find that very delicate balance between the interest to innovate and the interest to protect,” Tudorache told journalists.
The EU’s internal market commissioner, Thierry Breton, hailed the vote.
“I welcome the overwhelming support from the European Parliament for the EU AI Act,” he said. “Europe is now a global standard-setter in trustworthy AI.”
AI policing restrictions
The EU’s rules take a risk-based approach: the riskier the system, the tougher the requirements – with outright bans on the AI tools deemed to carry the most threat.
For example, high-risk AI providers must conduct risk assessments and ensure their products comply with the law before they are made available to the public.
“We are regulating as little as possible and as much as needed with proportionate measures for AI models,” Breton told the Agence France-Presse news agency.
Violations can see companies hit with fines ranging from 7.5 million to 35 million euros ($8.2m to $38.2m), depending on the type of infringement and the firm’s size.
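The dependence on infringement type and firm size can be sketched as taking the higher of a fixed cap and a share of annual turnover. The tier names, caps and percentages below are illustrative assumptions rather than a restatement of the Act’s penalty provisions:

```python
# Illustrative sketch: tier names, caps and percentages are
# assumptions, not the Act's exact penalty articles.
FINE_TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),  # most serious breaches
    "other_obligation":      (15_000_000, 0.03),  # most other breaches
    "incorrect_information": (7_500_000, 0.01),   # misleading regulators
}

def max_fine(infringement: str, annual_turnover: float) -> float:
    """Return the applicable ceiling: the higher of the fixed cap
    and a percentage of worldwide annual turnover."""
    fixed_cap, turnover_pct = FINE_TIERS[infringement]
    return max(fixed_cap, turnover_pct * annual_turnover)

# A firm with a €1bn turnover committing a prohibited practice:
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

Under this shape, the fixed caps bind for smaller firms while the turnover percentage dominates for large companies, which is why the same infringement type can yield very different ceilings.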
There are strict bans on using AI for predictive policing and systems that use biometric information to infer an individual’s race, religion or sexual orientation.
The rules also ban real-time facial recognition in public spaces with some exceptions for law enforcement. Police must seek approval from a judicial authority before any AI deployment.
Lobbies vs watchdogs
Because AI will likely transform every aspect of Europeans’ lives and big tech firms are vying for dominance in what will be a lucrative market, the EU has been subject to intense lobbying over the legislation.
Watchdogs have pointed to campaigning by French AI start-up Mistral AI and Germany’s Aleph Alpha as well as US-based tech giants like Google and Microsoft.
They warned the implementation of the new rules “could be further weakened by corporate lobbying”, adding that research showed “just how strong corporate influence” was during negotiations.
“Many details of the AI Act are still open and need to be clarified in numerous implementing acts, for example, with regard to standards, thresholds or transparency obligations,” three watchdogs based in Belgium, France and Germany said.
Breton stressed that the EU “withstood the special interests and lobbyists calling to exclude large AI models from the regulation”, maintaining: “The result is a balanced, risk-based and future-proof regulation.”
Tudorache said the law was “one of the … heaviest lobbied pieces of legislation, certainly in this mandate”, but insisted: “We resisted the pressure.”
The European AI Office will be the centre of AI expertise across the EU. It will play a key role in implementing the AI Act, especially for general-purpose AI, and will foster the development and use of trustworthy AI as well as international cooperation.
The European AI Office will support the development and use of trustworthy AI, while protecting against AI risks. The AI Office was established within the European Commission as the centre of AI expertise and forms the foundation for a single European AI governance system.
The EU aims to ensure that AI is safe and trustworthy. For this purpose, the AI Act is the first-ever comprehensive legal framework on AI worldwide, guaranteeing the health, safety and fundamental rights of people, and providing legal certainty to businesses across the 27 Member States.
The AI Office is uniquely equipped to support the EU’s approach to AI. It will play a key role in implementing the AI Act by supporting the governance bodies in Member States in their tasks. It will enforce the rules for general-purpose AI models. This is underpinned by the powers given to the Commission by the AI Act, including the ability to conduct evaluations of general-purpose AI models, request information and measures from model providers, and apply sanctions. The AI Office also promotes an innovative ecosystem of trustworthy AI, to reap the societal and economic benefits. It will ensure a strategic, coherent and effective European approach on AI at the international level, becoming a global reference point.
For well-informed decision-making, the AI Office collaborates with Member States and the wider expert community through dedicated fora and expert groups. These combine knowledge from the scientific community, industry, think tanks, civil society, and the open-source ecosystem, ensuring that their views and expertise are taken into account. Grounded in comprehensive insights into the AI ecosystem, including advances in capabilities, deployment and other trends, the AI Office fosters a thorough understanding of potential benefits and risks.
GenAI4EU
In January 2024, the Commission launched an AI innovation package to support startups and SMEs in developing trustworthy AI that complies with EU values and rules. Both the ‘GenAI4EU’ initiative and the AI Office were part of this package. Together they will contribute to the development of novel use cases and emerging applications in Europe’s 14 industrial ecosystems, as well as the public sector. Application areas include robotics, health, biotech, manufacturing, mobility, climate and virtual worlds.
Tasks of the AI Office
Supporting the AI Act and enforcing general-purpose AI rules
The AI Office makes use of its expertise to support the implementation of the AI Act by:
Contributing to the coherent application of the AI Act across the Member States, including the set-up of advisory bodies at EU level, facilitating support and information exchange
Developing tools, methodologies and benchmarks for evaluating capabilities and reach of general-purpose AI models, and classifying models with systemic risks
Drawing up state-of-the-art codes of practice that detail the rules, in cooperation with leading AI developers, the scientific community and other experts
Investigating possible infringements of rules, including evaluations to assess model capabilities, and requesting providers to take corrective action
Preparing guidance and guidelines, implementing and delegated acts, and other tools to support effective implementation of the AI Act and monitor compliance with the regulation
Strengthening the development and use of trustworthy AI
The Commission aims to foster trustworthy AI across the internal market. The AI Office, in collaboration with relevant public and private actors and the startup community, contributes to this by:
Advancing actions and policies to reap the societal and economic benefits of AI across the EU
Providing advice on best practices and enabling ready access to AI sandboxes, real-world testing and other European support structures for AI uptake
Encouraging innovative ecosystems of trustworthy AI to enhance the EU’s competitiveness and economic growth
Aiding the Commission in leveraging the use of transformative AI tools and reinforcing AI literacy
Fostering international cooperation
At international level, the AI Office contributes to a strategic, coherent, and effective EU approach, by:
Promoting the EU’s approach to trustworthy AI, including collaboration with similar institutions worldwide
Fostering international cooperation and governance on AI, with the aim of contributing to a global approach to AI
Supporting the development and implementation of international agreements on AI, including the support of Member States
To effectively carry out all tasks based on evidence and foresight, the AI Office continuously monitors the AI ecosystem, technological and market developments, but also the emergence of systemic risks and any other relevant trends.
Cooperation with institutions, experts and stakeholders
Collaboration with a diverse range of institutions, experts and stakeholders is essential for the work of the AI Office.
At an institutional level, the AI Office works closely with the European Artificial Intelligence Board formed by Member State representatives and the European Centre for Algorithmic Transparency (ECAT) of the Commission.
The Scientific Panel of independent experts ensures a strong link with the scientific community. Further technical expertise is gathered in an Advisory Forum, representing a balanced selection of stakeholders, including industry, startups and SMEs, academia, think tanks and civil society. The AI Office may also partner up with individual experts and organisations. It will also create fora for cooperation of providers of AI models and systems, including general-purpose AI, and similarly for the open-source community, to share best practices and contribute to the development of codes of conduct and codes of practice.
The AI Office will also oversee the AI Pact, which allows businesses to engage with the Commission and other stakeholders, such as by sharing best practices and joining its activities. This engagement will start before the AI Act becomes applicable and will allow businesses to plan ahead and prepare for the implementation of the AI Act. All this will be part of the European AI Alliance, a Commission initiative to establish an open policy dialogue on AI.
Further initiatives to foster trustworthy AI development and uptake within the EU are mapped out in the Coordinated Plan on AI.
Job opportunities and collaboration
The AI Office is recruiting talent with a variety of backgrounds for policy, technical and legal work, and administrative assistance. Find more information about the vacancies in the announcement. The deadline for expression of interest is 27 March 2024 at 12:00 CET. You can express your interest via the respective application form for technology specialists and administrative assistants.
Check the calls for expression of interest on the EPSO website.
External experts and stakeholders will also have the chance to join dedicated fora, and to support the work of the AI Office, through a separate call for expression of interest.
European AI Office — dr. Galia Mancheva, 8 March 2024
The Coordinated Plan on Artificial Intelligence aims to accelerate investment in AI, implement AI strategies and programmes and align AI policy to prevent fragmentation within Europe.
The Coordinated Plan on Artificial Intelligence (AI) was published in 2018. It is a joint commitment between the Commission, EU Member States, Norway and Switzerland to maximise Europe’s potential to compete globally. The initial Plan defined actions and funding instruments for the uptake and development of AI across sectors. In parallel, Member States were encouraged to develop their own national strategies.
The Coordinated Plan of 2021 aims to turn strategy into action by prompting the Commission and Member States to:
accelerate investments in AI technologies to drive resilient economic and social recovery, aided by the uptake of new digital solutions
fully and promptly implement AI strategies and programmes to ensure that the EU maximises the advantages of being an early adopter
align AI policy to remove fragmentation and address global challenges
To achieve this, the updated plan establishes four key sets of policy objectives, supported by concrete actions. It also indicates possible funding mechanisms and establishes a timeline for implementation.
The 2024 Communication on boosting startups and innovation in trustworthy AI builds on both the 2018 and 2021 coordinated action plans on AI. It reflects a policy shift towards generative AI in response to the latest technological developments. Similarly, the adopted version of the AI Act also includes provisions on generative AI. These rules expand on the Commission’s original proposal from 2021, which aimed to build a trustworthy AI ecosystem for the present and future.
The 2024 Communication proposes:
a strategic investment framework to leverage the EU’s assets – such as supercomputing infrastructure – to foster an innovative European AI ecosystem.
collaboration between startups, innovators, and industrial users, aiming to attract investments to the EU and provide access to key AI components like data, computing power, algorithms, and talent.
actions and investments to support startups and industries in Europe to become global leaders in trustworthy advanced AI models, systems, and applications.
a package of measures (under GenAI4EU) to support European startups and SMEs in developing trustworthy AI that adheres to EU values and regulations, including respecting privacy and data protection rules.
The Recovery and Resilience Facility provides an unprecedented opportunity to modernise and invest in AI. Through it, the EU can become a global leader in the development and uptake of human-centric, trustworthy, secure and sustainable AI technologies. By September 2023, it had already invested €4.4 billion in AI. More information can be found in the report Mapping EU level funding instruments to Digital Decade targets.
The actions outlined in the plans have been actively implemented by both the Commission and Member States, and progress has been made in all chapters. Notably, the EU is fostering critical computing capacity through several successful actions:
The Chips Act establishes a legislative foundation to enhance the semiconductor industry’s resilience.
Together, these initiatives create a synergistic ecosystem for advancing microelectronics and computing capacity in Europe. The Commission is also monitoring and assessing the progress of these actions and will – in collaboration with Member States – report on the monitoring during 2024.
Overall, the first two years of implementation confirmed that joint actions and structured cooperation between Member States and the Commission are key to the EU’s global competitiveness and leadership in AI development and uptake. Most Member States have adopted national AI strategies and started to implement them. Investments in AI have increased, and the EU was able to mobilise critical resources to support these processes.