Posts

Could the new EU AI Act stifle genAI innovation in Europe? A new study says it could

“The EU’s AI Act needs to be closely monitored to avoid overburdening innovative AI developers… with unnecessary red tape,” one industry body warns.

 

Europe’s growing generative artificial intelligence (GenAI) landscape is highly competitive, but additional regulation could stifle innovation, a new study has found.

The report by Copenhagen Economics said that there are no “immediate competition concerns” in Europe’s generative AI scene that would warrant regulatory intervention. The study comes as EU regulators move to tighten competition rules in the AI market through the Digital Markets Act, the EU AI Act, and the new AI Office.

The Computer & Communications Industry Association (CCIA Europe), which commissioned the independent report, warned that regulatory intervention would be premature, slow down innovation and growth, and reduce consumer choice in generative AI.

“Allowing competition to flourish in the AI market will be more beneficial to European consumers than additional regulation prematurely being imposed, which would only stifle innovation and hinder new entrants,” said Aleksandra Zuchowska, CCIA Europe’s Competition Policy Manager.

“Instead, the impact of new AI-specific rules, such as the EU’s recently adopted AI Act, needs to be closely monitored to avoid overburdening innovative AI developers with disproportionate compliance costs and unnecessary red tape”.

The authors of the study noted that a growing number of foundation model developers are active in the EU, such as Mistral AI and Aleph Alpha, and recognised that Europe’s GenAI sector is vibrant.

Competition concerns

But while the authors saw no competition concerns in the short term, they said some may emerge in the near future, including uncertainty for GenAI start-ups as they face challenges in scaling up and regulatory costs, such as those stemming from the EU AI Act.

The study also warned of potential competition concerns, including limited access to data, partnerships between large companies and smaller ones, and leveraging behaviour by big companies.

France’s Mistral AI is a prime example of a start-up partnering with Big Tech: under a deal announced in February, it made its large language models (LLMs) available to Microsoft Azure customers and gave Microsoft a minority stake in the company.

The study noted that if a larger partner uses its market power to exercise decisive control over a start-up or gain privileged or exclusive access to its technology, it could harm competition.

But it said partnerships are less likely to create competition concerns if there are no or limited exclusivity conditions and limited privileged access to the startup’s valuable technological assets.

Source: https://www.euronews.com/next/2024/03/22/could-the-new-eu-ai-act-stifle-genai-innovation-in-europe-a-new-study-says-it-could

EU parliament greenlights landmark artificial intelligence regulations

Europe moves closer to adopting world’s first AI rules as EU lawmakers endorse provisional agreement.

The European Parliament has given final approval to wide-ranging rules to govern artificial intelligence.

The far-reaching regulation – the Artificial Intelligence Act – was passed by lawmakers on Wednesday. Senior European Union officials said the rules, first proposed in 2021, will protect citizens from the possible risks of a technology developing at breakneck speed while also fostering innovation.

Brussels has sprinted to pass the new law since Microsoft-backed OpenAI’s ChatGPT arrived on the scene in late 2022, unleashing a global AI race.

Just 46 lawmakers in the European Parliament in Strasbourg voted against the proposal. It won the support of 523 MEPs.

The European Council is expected to formally endorse the legislation by May. It will be fully applicable 24 months after its entry into force.

The rules will cover high-impact, general-purpose AI models and high-risk AI systems, which will have to comply with specific transparency obligations and EU copyright laws.

The act will regulate foundation models or generative AI, such as the technology behind OpenAI’s ChatGPT, which are trained on large volumes of data to generate new content and perform tasks.

Government use of real-time biometric surveillance in public spaces will be restricted to cases of certain crimes; prevention of genuine threats, such as “terrorist” attacks; and searches for people suspected of the most serious crimes.

“Today is again an historic day on our long path towards regulation of AI,” said Brando Benifei, an Italian lawmaker who pushed the text through parliament with Romanian MEP Dragos Tudorache.

“[This is] the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI,” he said.

“We managed to find that very delicate balance between the interest to innovate and the interest to protect,” Tudorache told journalists.

The EU’s internal market commissioner, Thierry Breton, hailed the vote.

“I welcome the overwhelming support from the European Parliament for the EU AI Act,” he said. “Europe is now a global standard-setter in trustworthy AI.”

AI policing restrictions

The EU’s rules take a risk-based approach: the riskier the system, the tougher the requirements – with outright bans on the AI tools deemed to carry the most threat.

For example, high-risk AI providers must conduct risk assessments and ensure their products comply with the law before they are made available to the public.

“We are regulating as little as possible and as much as needed with proportionate measures for AI models,” Breton told the Agence France-Presse news agency.

Violations can see companies hit with fines ranging from 7.5 million to 35 million euros ($8.2m to $38.2m), depending on the type of infringement and the firm’s size.

There are strict bans on using AI for predictive policing and systems that use biometric information to infer an individual’s race, religion or sexual orientation.

The rules also ban real-time facial recognition in public spaces with some exceptions for law enforcement. Police must seek approval from a judicial authority before any AI deployment.

Lobbies vs watchdogs

Because AI will likely transform every aspect of Europeans’ lives and big tech firms are vying for dominance in what will be a lucrative market, the EU has been subject to intense lobbying over the legislation.

Watchdogs have pointed to campaigning by French AI start-up Mistral AI and Germany’s Aleph Alpha as well as US-based tech giants like Google and Microsoft.

They warned the implementation of the new rules “could be further weakened by corporate lobbying”, adding that research showed “just how strong corporate influence” was during negotiations.

“Many details of the AI Act are still open and need to be clarified in numerous implementing acts, for example, with regard to standards, thresholds or transparency obligations,” three watchdogs based in Belgium, France and Germany said.

Breton stressed that the EU “withstood the special interests and lobbyists calling to exclude large AI models from the regulation”, maintaining: “The result is a balanced, risk-based and future-proof regulation.”

Tudorache said the law was “one of the … heaviest lobbied pieces of legislation, certainly in this mandate”, but insisted: “We resisted the pressure.”

Source: https://www.aljazeera.com/news/2024/3/13/eu-parliament-greenlights-landmark-artificial-intelligence-regulations

Sectorial AI Testing and Experimentation Facilities under the Digital Europe Programme

To make the EU the place where AI excellence thrives from the lab to the market, the European Union is setting up world-class Testing and Experimentation Facilities (TEFs) for AI.

Together with Member States, the Commission is co-funding the TEFs to support AI developers in bringing trustworthy AI to the market more efficiently and to facilitate its uptake in Europe. TEFs are specialised large-scale reference sites, open to all technology providers across Europe, for testing and experimenting at scale with state-of-the-art AI solutions, including both software and hardware products and services (e.g. robots), in real-world environments.

These large-scale reference testing and experimentation facilities will offer a combination of physical and virtual facilities in which technology providers can get support to test their latest AI-based software and hardware technologies in real-world environments. This includes support for full integration, testing and experimentation of the latest AI-based technologies to solve issues and improve solutions in a given application sector, including validation and demonstration.

TEFs can also contribute to the implementation of the Artificial Intelligence Act by supporting regulatory sandboxes in cooperation with competent national authorities for supervised testing and experimentation.

TEFs will be an important part of building the AI ecosystem of excellence and trust to support Europe’s strategic leadership in AI.

The Digital Europe Programme 2023-2024 proposes a Coordination and Support Action (CSA) to apply a cross-sector perspective to all existing sectorial Testing and Experimentation Facilities (TEFs). The action was launched on 25 April. For more on the information session, see our event report page.

TEF Projects

The selected TEF projects started on 1 January 2023. They focus on the following high-impact sectors:

  • Agri-Food: project “agrifoodTEF”
  • Healthcare: project “TEF-Health”
  • Manufacturing: project “AI-MATTERS”
  • Smart Cities & Communities: project “Citcom.AI”

Co-funding between the European Commission (through the Digital Europe Programme) and the Member States will support the TEFs for five years, with budgets of between €40 million and €60 million per project. On 27 June, the European Commission, along with Member States and 128 partners from research, industry, and public organisations, launched their investment in the four projects.

Smart cities:

Artificial Intelligence Testing and Experimentation Facilities for Smart Cities & Communities: Citcom.AI

The new EU-wide network of permanent testing and experimentation facilities (TEF) for smart cities and communities will help accelerate the development of trustworthy AI in Europe by giving companies access to test and try out AI-based products in real-world conditions.

By further developing and strengthening existing infrastructures and expertise, CitCom.ai provides reality-lab conditions in testing and experimentation facilities relevant to AI and robotics solutions targeting the sustainable development of cities and communities. In doing so, CitCom.ai helps European cities and communities in the transition towards a greener and more digital Europe and in maintaining and developing their resilience and competitiveness.

CitCom.ai focuses on three overarching themes:

  • POWER targets changing energy systems and reducing energy consumption.
  • MOVE targets more efficient and greener transportation linked to logistics and mobility.
  • CONNECT serves citizens through local infrastructures and cross-sector services.

These areas support AI and robotics-based innovations, with use cases organised under the three overarching themes, such as:

  • POWER: energy, such as local district heating load forecasts; environmental solutions, such as adaptive street lighting; cybersecurity, ethics and edge learning.
  • MOVE: urban machine-learning algorithms, such as predicting pedestrian flow; smart intersections, such as identifying road safety concerns; electro-mobility and autonomous driving.
  • CONNECT: pollution, greenhouse gas emissions and noise management; urban development management; water and wastewater management; integrated facility management; delivery management by drones; and tourism management.

CitCom.ai is organised as three “super nodes” (Nordic, Central and South) with satellites and sub-nodes located across 11 countries in the European Union: Denmark, Sweden, Finland, the Netherlands, Belgium, Luxembourg, France, Germany, Spain, Poland and Italy. The consortium of 36 partners is coordinated by the Technical University of Denmark. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €40 million and is expected to achieve long-term financial sustainability.

Relevant Link:

Join us in building the European way of Digital Transformation for 300 million Europeans | Living in EU (living-in.eu)

TEF-Health:

Artificial Intelligence Testing and Experimentation Facilities for Health AI and Robotics: TEF-Health

The EU project TEF-Health is a network of real testing facilities, such as hospital platforms (both physical infrastructures and data and compute infrastructures), living labs, etc., and laboratory testing facilities that will offer innovators the opportunity to test and experiment with their AI and robotics solutions in large-scale, sustainable, real or realistic environments. The consortium is implementing evaluation activities that facilitate market access for trustworthy intelligent technologies, particularly by considering new regulatory requirements (certification, standardisation, codes of conduct, etc.). TEF-Health will ensure easy access to these evaluation resources (link with digital innovation hubs, etc.).

In doing so, TEF-Health contributes to increasing the effectiveness, resilience and sustainability of EU health and care systems; reducing inequalities in healthcare delivery in the EU; and ensuring compliance with legal, ethical, quality and interoperability standards.

A key component of an agile certification process is regulatory sandboxes, where all relevant stakeholders can work together to create innovative testing and validation tools for trustworthy AI in medical devices for specific use cases.

The use cases are defined in four domains: 1) Neurotech, 2) Cancer, 3) Cardiovascular and 4) Intensive Care.

The consortium comprises seven nodes in Germany, France, Sweden, Belgium, Portugal, Slovakia and Italy; two associated nodes in Finland and Czechia; and the pan-EU structures EBRAINS AISBL, EIT Health and the EHDS2 Pilot initiative. The consortium of 51 partners is coordinated by Charité – Universitätsmedizin Berlin. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.

Relevant links:

European Cancer Imaging Initiative | Shaping Europe’s digital future (europa.eu)

A cancer plan for Europe (europa.eu)

European Health Data Space (europa.eu)

European data strategy (europa.eu)

Agri-food:

Artificial Intelligence Testing and Experimentation Facilities for Agrifood Innovation: AgrifoodTEF

Built as a network of physical and digital facilities across Europe, the EU project agrifoodTEF provides services that help assess and validate third-party AI and robotics solutions in real-world conditions, aiming to foster sustainable and efficient food production. AgrifoodTEF offers validation tools to innovators so they can develop their ideas into market-ready products and services.

There are five impact sectors: arable farming (performance enhancement of autonomous driving vehicles), tree crops (optimisation of natural resources and inputs for Mediterranean crops), horticulture (finding the right nutrient balance as well as crop and yield quality), livestock farming (improvement of sustainability in cow, pig and poultry farming) and food processing (traceability of production and supply chains).

The use cases include quality crops, agro-machinery, AI conformity assessment, agro ecology in controlled environments, co-creation in agrifood production, HPC for agrifood, AI for arable and farmland machinery, and new frontiers for sustainable farming in the North.

AgrifoodTEF is organised in national nodes (Italy, Germany and France) and satellite nodes (Poland, the Netherlands, Belgium, Sweden and Austria). The consortium of 29 partners is coordinated by Fondazione Bruno Kessler. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.

Relevant links:

Farm to Fork Strategy

Manufacturing:

Artificial Intelligence Testing and Experimentation Facilities for Manufacturing Innovation: AI-Matters

The AI-MATTERS project is building a network of physical and digital facilities across Europe where innovators can validate their solutions under real-life conditions. The EU-project contributes to increasing the resilience and the flexibility of the European manufacturing sector through the deployment of the latest developments in AI, robotics, smart and autonomous systems.

AI-MATTERS will provide an extensive catalogue of services to innovators in the following key topics: factory-level optimization, human-robot interaction, circular economy and adoption of emerging AI enabling technologies. All consortium members bring their expertise in manufacturing for different sectors such as automotive, space and mobility, textile, recycling, etc.

The AI-Matters network will provide testing and experimentation facilities from companies across Europe at eight locations in Denmark, France, Germany, Greece, Italy, the Netherlands, Spain and the Czech Republic. The consortium of 25 partners is coordinated by CEA-List. Co-funded by the Digital Europe Programme, the 5-year project started in January 2023 with an overall budget of €60 million and is expected to achieve long-term financial sustainability.

Source: https://digital-strategy.ec.europa.eu/en/activities/testing-and-experimentation-facilities#tab_1

A European approach to artificial intelligence

The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.

The way we approach Artificial Intelligence (AI) will define the world we live in in the future. To help build a resilient Europe for the Digital Decade, people and businesses should be able to enjoy the benefits of AI while feeling safe and protected.

The European AI Strategy aims at making the EU a world-class hub for AI and ensuring that AI is human-centric and trustworthy. Such an objective translates into the European approach to excellence and trust through concrete rules and actions.

In April 2021, the Commission presented its AI package, including its Communication on fostering a European approach to AI, a review of the Coordinated Plan on AI, and its proposal for a regulatory framework on AI.

A European approach to excellence in AI

Fostering excellence in AI will strengthen Europe’s potential to compete globally.

The EU will achieve this by:

  1. enabling the development and uptake of AI in the EU;
  2. making the EU the place where AI thrives from the lab to the market;
  3. ensuring that AI works for people and is a force for good in society;
  4. building strategic leadership in high-impact sectors.

The Commission and Member States agreed to boost excellence in AI by joining forces on policy and investments. The 2021 review of the Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape and bring AI strategy into action.

Maximising resources and coordinating investments is a critical component of AI excellence. Through the Horizon Europe and Digital Europe programmes, the Commission plans to invest €1 billion per year in AI. It will mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the digital decade.

The Recovery and Resilience Facility makes €134 billion available for digital. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.

Access to high quality data is an essential factor in building high performance, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act and the Digital Markets Act, and the Data Governance Act provide the right infrastructure for building such systems.

A European approach to trust in AI

Building trustworthy AI will create a safe and innovation-friendly environment for users, developers and deployers.

The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

  1. European legal framework for AI to address fundamental rights and safety risks specific to the AI systems;
  2. civil liability framework – adapting liability rules to the digital age and AI;
  3. a revision of sectoral safety legislation (e.g. the Machinery Regulation and the General Product Safety Directive).

European proposal for a legal framework on AI

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard.

This framework gives AI developers, deployers and users the clarity they need by intervening only in those cases that existing national and EU legislations do not cover. The legal framework for AI proposes a clear, easy to understand approach, based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.


Source: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

 

EU to invest €13.5 billion in research and innovation for 2023-2024

The Commission has adopted the main Horizon Europe work programme 2023-24, with around €13.5 billion to support researchers and innovators in Europe to pursue breakthrough solutions for environmental, energy, digital and geopolitical challenges.

As part of the broader EU €95.5 billion research and innovation programme, Horizon Europe, this funding will contribute to the EU reaching its climate goals, increasing energy resilience, and developing core digital technologies. It will also address targeted actions to support Ukraine, boost economic resilience and contribute to a sustainable recovery from the COVID-19 pandemic.

It will help to achieve a stronger European research and innovation ecosystem, including through wider participation of researchers and innovators across Europe, greater mobility and funding for world class research infrastructures.

Delivering on climate action and digital transformation

€5.67 billion (over 42% of the work programme’s budget) is dedicated to reaching key climate action objectives, finding innovative solutions to reduce greenhouse gas emissions and adapting to climate change. €1.67 billion contributes to supporting biodiversity.

Over €4.5 billion will support the EU digital transition, including for the development of core digital technologies and encouraging their integration in our lives.

Extensive support will also be provided to the New European Bauhaus, which aims to show the benefits of the green transition in people’s daily lives and living spaces.

Supporting a safe, secure and resilient Europe

Nearly €970 million will be invested to help speed up the clean energy transition, in line with the REPowerEU Plan, and increase Europe’s energy independence from unreliable suppliers and volatile fossil fuels.

In 2023, the work programme will direct investments of more than €1 billion from NextGenerationEU towards Europe’s recovery from the economic and social damage caused by the COVID-19 pandemic. Moreover, it supports research and innovation with €336 million to enhance pandemic preparedness and to respond to health emergencies. This is in line with the objectives of the European Health Emergency Preparedness and Response Authority (HERA).

It will also support the protection of critical infrastructures against physical and cyber threats to reinforce the EU’s resilience.

Targeted support to Ukraine

Targeted support to Ukraine is provided on top of the €70 million of dedicated measures already launched in 2022. New actions include reinforcing the access of researchers from Ukraine to European research infrastructures, continuing support to the health scientists from Ukraine, and supporting the climate-neutral reconstruction of several Ukrainian cities through the EU Mission for Climate-Neutral and Smart Cities.

Global challenges require global solutions

The Horizon Europe work programme 2023-2024 covers actions to support and strengthen international initiatives in renewable energies, food systems, global health, environmental observations and more. It builds on the ‘Africa Initiative’ and introduces the new ‘Mediterranean Initiative’, responding to the new research and innovation agenda developed with the Union for the Mediterranean.

Concerning cooperation with China, the work programme will focus on tackling global challenges through two research flagship initiatives, on Food, Agriculture and Biotechnology and on Climate Change and Biodiversity.

Openness to international cooperation is balanced with the need to safeguard EU interests in strategic areas, in particular to promote the EU’s open strategic autonomy and its technological leadership and competitiveness.

EU Missions

More than €600 million will be invested in the five EU Missions in 2023. This will support research and innovation expected to result in, for example, local and regional authorities better prepared to face climate-related risks, the restoration of at least 25 000 km of free-flowing rivers, Climate City Contracts with 100 cities, the roll-out of soil monitoring programmes, and the optimisation of minimally invasive diagnostic cancer interventions. The Commission expects missions to raise contributions from other funding sources so that the overall level of investment at the end of 2023 surpasses the investments made from Horizon Europe.

Next Steps

The first calls for proposals will open on the EU Funding & Tenders Portal on 7 December 2022. Horizon Europe Information Days targeting potential applicants are taking place between 6 December 2022 and 16 February 2023.

Background

The 2023-2024 Horizon Europe work programme is based on Horizon Europe’s Strategic Plan 2021-2024, adopted in March 2021. It was co-created with stakeholders, Member States and the European Parliament. On 1 December, the Commission launched the largest public consultation ever held on the past, present and future of the EU’s Horizon research and innovation programmes 2014-2027. It is open for 12 weeks and contributes to the final evaluation of Horizon 2020, the interim evaluation of Horizon Europe as well as laying the groundwork for the preparations of the Horizon Europe Strategic Plan 2025-2027.

For More Information

Horizon Europe Work Programme 2023-2024

Video of Mariya Gabriel, Commissioner for Innovation, Research, Culture, Education and Youth

Horizon Europe Factsheets

Horizon Europe

Horizon Europe Strategic Plan

Funding & Tenders portal – funding opportunities

 

Source: https://digital-strategy.ec.europa.eu/en/news/eu-invest-eu135-billion-research-and-innovation-2023-2024

Digital rights and principles: a digital transformation for EU citizens

The Commission welcomes the agreement reached with the Parliament and the Council on the European declaration on digital rights and principles. The declaration, proposed in January, establishes a clear reference point about the kind of human-centred digital transformation that the EU promotes and defends, at home and abroad.

It builds on key EU values and freedoms and will benefit all individuals and businesses. The declaration will also provide a guide for policymakers and companies when dealing with new technologies. The declaration focuses on six key areas: putting people at the centre of the digital transformation; solidarity and inclusion; freedom of choice; participation in digital life; safety and security; and sustainability.

Executive Vice-President for a Europe Fit for the Digital Age, Margrethe Vestager, said:

The digital transformation is about ensuring that technologies are safe. That they work in our interests and respect our rights and values. The principles in the declaration of digital rights and principles will continue to be supported by EU legislation.

Commissioner Thierry Breton said:

The declaration on digital rights and principles will ensure Europe is the continent people look to for guidance in the digital transformation. It enshrines values we are already working towards, such as top-class connectivity, access to public services, and a safe digital world.


Source: https://digital-strategy.ec.europa.eu/en/news/digital-rights-and-principles-digital-transformation-eu-citizens


Industrial applications of artificial intelligence and big data

The deployment of artificial intelligence (AI) is critical for the success of small and medium-sized enterprises (SMEs) in the EU. In industrial sectors in particular, AI solutions are becoming ever more important as they help to optimize production processes, predict machinery failures and develop more efficient smart services. European industry can also harness big data and the smart use of ICT to enhance productivity and performance, and pave the way for innovation. 

Critical industrial applications of AI for SMEs

We launched a study to explore the most critical AI applications to accelerate their uptake by SMEs within strategic European value chains. Small and medium enterprises (SMEs) struggle more than large companies to keep up with the pace of digital transformation and industrial transition in general. They face specific challenges that could hamper wide AI adoption, reducing overall economic benefits for the European economy.

The study finds that there is a sound base of existing EU and national policy and initiatives that promote the uptake of advanced technologies. Yet, the key to success is to maintain policy focus on strategic priorities and increase coordination among them. See the reports on artificial intelligence for more insights.

Reports on Artificial Intelligence – critical industrial applications

Background

Artificial intelligence (AI) is now a priority for businesses, but also for policymakers, academic research institutions and the broader public. AI techniques are expected to bring benefits for governments, citizens and businesses, including in the fight against Covid-19, enabling resilience and improving green, sustainable growth. At the same time, AI has the potential to disrupt and possibly displace business models as well as impact the way people live and work.

The study estimates that while AI’s incremental GDP impact is initially moderate (up to 1.8% of additional cumulative GDP growth by 2025), there is significant potential in the longer term (up to 13.5% of cumulative GDP growth by 2030), with disparities between regions and industries. However, the potential of AI will only fully materialise if European businesses, and in particular SMEs, are properly supported in their AI transition and grasp the competitive advantages it can provide.

AI is likely to have the largest economic impact on:

  • manufacturing and the Industrial Internet of Things – IIoT, with overall AI impact potential in Europe of up to €200 billion by 2030
  • mobility, with AI impact potential of €300 billion
  • smart health, with AI impact potential of €105 billion

Foresight analysis of the effects of AI and automation technologies on the European labour market demonstrates significant effects in at least four ways:

  1. labour substitution (with capital) is likely to displace parts of the workforce
  2. investment in AI and AI-enabled product and service innovation may create new direct jobs
  3. wealth creation may create positive spillover effects for the economy
  4. AI could enable higher participation in global flows (data and trade), creating additional jobs

These topics were initially debated at the conference ‘A European perspective on Artificial Intelligence: Paving the way for SMEs’ AI adoption in key industrial value chains’, held in Brussels in February 2020 with over 200 stakeholders.

Business-to-business big data sharing and access

In spite of huge economic potential (see below), data sharing between companies has not taken off at sufficient scale. The Commission seeks to identify and address any undue hurdles hindering data sharing and the use of privately-held data by other companies, as announced in the February 2020 Communication, ‘A European strategy for data’.

On business-to-business (B2B) data sharing, we are deploying two big data pilot projects to explore the innovation potential and innovative business models created by sharing data between data-producing/controlling entities and third-party businesses, notably SMEs. These pilot projects are being carried out in two strategic value chains: smart health (where the aim is to use data on diabetes from healthcare providers) and automotive (where sharing in-vehicle data produced by connected vehicles will be examined). Both projects are part of the ‘Big data and B2B platforms: the next frontier for Europe’s industry and enterprises’ study being carried out from 2019 to 2021.

Background

By harnessing the intelligence of big data and digital platforms, European industries can enhance productivity and performance, increase profitability, strengthen their competitive advantage, reduce risk, and pave the way for innovation. According to the Big data and B2B platforms report by the Strategic Forum on Digital Entrepreneurship, industrial companies are expected to achieve cost reductions of 3.6% per year over the next five years by basing business decisions on big data analytics. The European big data economy is expected to grow almost threefold by 2025, reaching an estimated €829 billion, or 5.8% of EU GDP.
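As a rough illustration (a back-of-the-envelope calculation, not a figure from the report), a 3.6% annual cost reduction that compounds year on year would amount to a cumulative reduction of roughly 17% over five years:

```python
# Illustrative only: cumulative effect of a 3.6% annual cost reduction,
# assuming it compounds year on year (the report itself does not state this).
annual_reduction = 0.036   # 3.6% per year, as cited above
years = 5

remaining_share = (1 - annual_reduction) ** years   # fraction of the original cost base left
cumulative_reduction = 1 - remaining_share

print(f"Cost base remaining after {years} years: {remaining_share:.1%}")   # ~83.3%
print(f"Cumulative cost reduction: {cumulative_reduction:.1%}")            # ~16.7%
```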

Source: https://ec.europa.eu/growth/industry/policy/advanced-technologies/industrial-applications-artificial-intelligence-and-big-data_en

Regulatory framework proposal on Artificial Intelligence

The Commission is proposing the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs). 

The proposal is part of a wider AI package, which also includes the updated Coordinated Plan on AI. Together they guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.

Why do we need rules on AI?

The proposed AI regulation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can be used to solve many societal challenges, certain AI systems create risks that need to be addressed to avoid undesirable outcomes. 

For example, it is often not possible to find out why an AI system has made a decision or prediction and reached a certain outcome. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.

Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

The proposed rules will:

  • address risks specifically created by AI applications
  • propose a list of high-risk applications 
  • set clear requirements for AI systems for high risk applications
  • define specific obligations for AI users and providers of high risk applications
  • propose a conformity assessment before the AI system is put into service or placed on the market
  • propose enforcement after such an AI system is placed in the market
  • propose a governance structure at European and national level

A risk-based approach

[Figure: the four levels of risk, from minimal to unacceptable. Source: digital-strategy.ec.europa.eu]

Unacceptable risk: All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk; 
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams); 
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan); 
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market: 

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes; 
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance; 
  • Clear and adequate information to the user; 
  • Appropriate human oversight measures to minimise risk; 
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high-risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back. 

Minimal risk: The proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category, where they represent minimal or no risk. 
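The tiered approach above can be summarised in a few lines of code. The sketch below is a hypothetical illustration only: the four risk levels and the high-risk obligations mirror the text above, but the enum, the dictionary and the pre_market_checklist helper are illustrative assumptions, not anything defined by the proposal.

```python
from enum import Enum

# Toy encoding of the proposal's four risk tiers, for illustration only.
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring by governments)
    HIGH = "high"                  # allowed only after meeting strict obligations
    LIMITED = "limited"            # specific transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # free use (e.g. spam filters, AI-enabled video games)

# Simplified mapping of the obligations listed above to the tiers they apply to.
OBLIGATIONS = {
    RiskLevel.HIGH: [
        "adequate risk assessment and mitigation system",
        "high-quality datasets feeding the system",
        "logging of activity for traceability",
        "detailed technical documentation",
        "clear and adequate information to the user",
        "appropriate human oversight measures",
        "high level of robustness, security and accuracy",
    ],
    RiskLevel.LIMITED: ["inform users that they are interacting with an AI system"],
    RiskLevel.MINIMAL: [],
}

def pre_market_checklist(level: RiskLevel) -> list[str]:
    """Return the obligations to satisfy before placing a system on the market."""
    if level is RiskLevel.UNACCEPTABLE:
        raise ValueError("Prohibited practice: the system may not be placed on the market.")
    return OBLIGATIONS[level]

# Example: the proposal lists CV-sorting software for recruitment as high-risk.
for obligation in pre_market_checklist(RiskLevel.HIGH):
    print("-", obligation)
```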

How does it all work in practice for providers of high risk AI systems?

Once the AI system is on the market, authorities are in charge of the market surveillance, users ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning.

Future-proof legislation

As AI is a fast evolving technology, the proposal is based on a future-proof approach, allowing rules to adapt to technological change. AI applications should remain trustworthy even after they have been placed on the market. This requires ongoing quality and risk management by providers. 

Next steps

Following the Commission’s proposal in April 2021, the regulation could enter into force in the second half of 2022, followed by a transitional period. During this period, standards would be mandated and developed, and the governance structures set up would become operational. The second half of 2024 is the earliest the regulation could become applicable to operators, with the standards ready and the first conformity assessments carried out.

Source: https://digital-strategy.ec.europa.eu/

Digital transformation: importance, benefits and EU policy

Learn how the EU is helping to shape a digital transformation in Europe to benefit people, companies and the environment.

The digital transformation is one of the EU’s priorities. The European Parliament is helping to shape the policies that will strengthen Europe’s capacities in new digital technologies, open new opportunities for businesses and consumers, support the EU’s green transition and help it to reach climate neutrality by 2050, support people’s digital skills and training for workers, and help digitalise public services, while ensuring the respect of basic rights and values.

In May 2021, Parliament adopted a report on shaping the digital future of Europe, calling on the European Commission to further tackle challenges posed by the digital transition and especially take advantage of the opportunities of the digital single market, improve the use of artificial intelligence and support digital innovation and skills.

What is digital transformation?
  • Digital transformation is the integration of digital technologies by companies and the impact of the technologies on society.
  • Digital platforms, the Internet of Things, cloud computing and artificial intelligence are among the technologies affecting sectors from transport to energy, agri-food, telecommunications, financial services, factory production and health care, and transforming people’s lives.
  • Technologies could help to optimise production, reduce emissions and waste, boost companies’ competitive advantages and bring new services and products to consumers.

Funding of the EU’s digital priorities

Digital plays an essential role in all EU policies. The Covid crisis accentuated the need for a response that will benefit society and competitiveness in the long run. Digital solutions present important opportunities and are essential to ensuring Europe’s recovery and competitive position in the global economy.

The EU’s plan for economic recovery demands that member states allocate at least 20% of the €672.5 billion Recovery and Resilience Facility to digital transition. Investment programmes such as the research and innovation-centred Horizon Europe and infrastructure-centred Connecting Europe Facility allocate substantial amounts for digital advancements as well.

While the general EU policy is to endorse digital goals through all programmes, some investment programmes and new rules specifically aim to achieve them.

Digital Europe programme

In April 2021, Parliament adopted the Digital Europe programme, the EU’s first financial instrument focused specifically on bringing technology to businesses and people. It aims to invest in digital infrastructure so that strategic technologies can help boost Europe’s competitiveness and green transition, as well as ensure technological sovereignty. It will invest €7.6 billion in five areas: supercomputing (€2.2 billion), artificial intelligence (€2.1 billion), cybersecurity (€1.6 billion), advanced digital skills (€0.6 billion), and ensuring a wide use of digital technologies across the economy and society (€1.1 billion).

Online safety and platform economy

Online platforms are an important part of the economy and people’s lives. They present significant opportunities as marketplaces and are important communication channels. However, they also pose significant challenges.

The EU is working on new digital services legislation, aiming to foster competitiveness, innovation and growth, while boosting online security, tackling illegal content, and ensuring the protection of free speech, press freedom and democracy.

Read more on why and how the EU wants to regulate the platform economy

Among measures to ensure safety online, the Parliament adopted new rules to prevent the dissemination of terrorist content online in April 2021. MEPs are also considering rules on a new European cybersecurity centre: in May 2021, they backed a new European cybersecurity centre and network that will increase Europe’s capacity against cyber threats.

Artificial intelligence and data strategy

Artificial intelligence (AI) could benefit people by improving health care, making cars safer and enabling tailored services. It can improve production processes and bring a competitive advantage to European businesses, including in sectors where EU companies already enjoy strong positions, such as the green and circular economy, machinery, farming and tourism.

To ensure Europe makes the most of AI’s potential, MEPs have accentuated the need for human-centric AI legislation, aimed at establishing a framework that will be trustworthy, can implement ethical standards, support jobs, help build competitive “AI made in Europe” and influence global standards. The Commission presented its proposal for AI regulation on 21 April 2021.

Read more on how MEPs want to regulate artificial intelligence

The success of AI development in Europe largely depends on a successful European data strategy. Parliament has stressed the potential of industrial and public data for EU companies and researchers and called for European data spaces, big data infrastructure and legislation that will contribute to trustworthiness.

More on what Parliament wants for the European data strategy

Digital skills and education

The Covid-19 pandemic has demonstrated how important digital skills are for work and interactions, but has also accentuated the digital skills gap and the need to increase digital education. The Parliament wants the European skills agenda to ensure people and businesses can take full advantage of technological advancements.

42% of EU citizens lack basic digital skills

Fair taxation of the digital economy

Most tax rules were established well before the digital economy existed. To reduce tax avoidance and make taxes fairer, MEPs are calling for a global minimum tax rate and new taxation rights that would allow more taxes to be paid where value is created and not where tax rates are lowest.

Artificial Intelligence: first quantitative study of its kind finds uptake by businesses across Europe is on the rise

The European Commission has published the first quantitative overview on the uptake of Artificial Intelligence (AI) technologies among European enterprises. This study will help monitor the adoption of AI in Member States and further assess the challenges faced by enterprises, for their internal organisation and externally.

AI uptake across European enterprises

The robust survey found that four in ten (42%) enterprises have adopted at least one AI technology, with a quarter of them having already adopted at least two. Almost twice the proportion of large enterprises (39%) use two or more AI technologies compared to micro (21%) and small enterprises (22%). A total of 18% have plans to adopt AI in the next two years, while 40% of the enterprises participating do not use AI, nor do they plan to in the future. Overall awareness of AI amongst companies is however high across the EU, standing at 78%.

Challenges to AI technology adoption across Europe

The study also found three key internal barriers that enterprises are facing when adopting AI technologies: 57% experienced difficulties in hiring new staff with the right skills, while just over half (52%) said the cost of adopting AI technology was a barrier for their enterprise. The cost of adapting operational processes was also one of the three key issues (49%). Reducing uncertainty can be beneficial, as enterprises find liability for potential damages (33%), data standardisation (33%) and regulatory obstacles (29%) to be major external challenges to AI adoption.

Next steps

The “European enterprise survey on the use of technologies based on artificial intelligence” will be used to monitor the adoption of AI across Member States and to assess the obstacles and barriers to the use of AI. In addition, it will present an overview of AI-related skills in the workforce. It will also help the Commission to shape future policy initiatives in the field of AI.

Background

The study was carried out for the European Commission by the market research company Ipsos together with iCite. A robust survey instrument was designed and fielded in EU Member States, as well as Norway, Iceland and the UK. A total of 9640 enterprises took part between January and March 2020. The five key performance indicators measured by the survey were AI awareness, adoption, sourcing, external and internal obstacles to adoption. The study used Computer Assisted Telephone Interviewing to obtain representative country estimates.

Artificial intelligence has become an area of strategic importance and a key driver of economic development. It can bring solutions to many societal challenges from treating diseases to minimising the environmental impact of farming. However, socio-economic, legal and ethical impacts have to be carefully addressed.

Source: https://digital-strategy.ec.europa.eu/